00:00:00.001 Started by upstream project "autotest-per-patch" build number 132308 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.080 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.081 The recommended git tool is: git 00:00:00.081 using credential 00000000-0000-0000-0000-000000000002 00:00:00.089 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.101 Fetching changes from the remote Git repository 00:00:00.104 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.116 Using shallow fetch with depth 1 00:00:00.116 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.117 > git --version # timeout=10 00:00:00.130 > git --version # 'git version 2.39.2' 00:00:00.130 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.145 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.145 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.288 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.303 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.315 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:02.315 > git config core.sparsecheckout # timeout=10 00:00:02.326 > git read-tree -mu HEAD # timeout=10 00:00:02.341 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 00:00:02.361 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:02.361 > git rev-list 
--no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:02.479 [Pipeline] Start of Pipeline 00:00:02.497 [Pipeline] library 00:00:02.499 Loading library shm_lib@master 00:00:02.499 Library shm_lib@master is cached. Copying from home. 00:00:02.518 [Pipeline] node 00:00:02.530 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest_2 00:00:02.532 [Pipeline] { 00:00:02.542 [Pipeline] catchError 00:00:02.544 [Pipeline] { 00:00:02.559 [Pipeline] wrap 00:00:02.568 [Pipeline] { 00:00:02.581 [Pipeline] stage 00:00:02.583 [Pipeline] { (Prologue) 00:00:02.602 [Pipeline] echo 00:00:02.603 Node: VM-host-WFP7 00:00:02.609 [Pipeline] cleanWs 00:00:02.619 [WS-CLEANUP] Deleting project workspace... 00:00:02.619 [WS-CLEANUP] Deferred wipeout is used... 00:00:02.625 [WS-CLEANUP] done 00:00:03.133 [Pipeline] setCustomBuildProperty 00:00:03.205 [Pipeline] httpRequest 00:00:03.531 [Pipeline] echo 00:00:03.533 Sorcerer 10.211.164.20 is alive 00:00:03.543 [Pipeline] retry 00:00:03.546 [Pipeline] { 00:00:03.565 [Pipeline] httpRequest 00:00:03.569 HttpMethod: GET 00:00:03.570 URL: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:03.571 Sending request to url: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:03.572 Response Code: HTTP/1.1 200 OK 00:00:03.572 Success: Status code 200 is in the accepted range: 200,404 00:00:03.573 Saving response body to /var/jenkins/workspace/raid-vg-autotest_2/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:03.718 [Pipeline] } 00:00:03.732 [Pipeline] // retry 00:00:03.740 [Pipeline] sh 00:00:04.020 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:04.033 [Pipeline] httpRequest 00:00:04.582 [Pipeline] echo 00:00:04.584 Sorcerer 10.211.164.20 is alive 00:00:04.591 [Pipeline] retry 00:00:04.593 [Pipeline] { 00:00:04.603 [Pipeline] httpRequest 00:00:04.608 HttpMethod: GET 00:00:04.608 URL: 
http://10.211.164.20/packages/spdk_ca87521f7a945a24d4a88af4aa495b55d2de10da.tar.gz 00:00:04.609 Sending request to url: http://10.211.164.20/packages/spdk_ca87521f7a945a24d4a88af4aa495b55d2de10da.tar.gz 00:00:04.610 Response Code: HTTP/1.1 200 OK 00:00:04.610 Success: Status code 200 is in the accepted range: 200,404 00:00:04.611 Saving response body to /var/jenkins/workspace/raid-vg-autotest_2/spdk_ca87521f7a945a24d4a88af4aa495b55d2de10da.tar.gz 00:00:25.502 [Pipeline] } 00:00:25.520 [Pipeline] // retry 00:00:25.527 [Pipeline] sh 00:00:25.812 + tar --no-same-owner -xf spdk_ca87521f7a945a24d4a88af4aa495b55d2de10da.tar.gz 00:00:28.363 [Pipeline] sh 00:00:28.649 + git -C spdk log --oneline -n5 00:00:28.650 ca87521f7 test/nvme/interrupt: Verify pre|post IO cpu load 00:00:28.650 83e8405e4 nvmf/fc: Qpair disconnect callback: Serialize FC delete connection & close qpair process 00:00:28.650 0eab4c6fb nvmf/fc: Validate the ctrlr pointer inside nvmf_fc_req_bdev_abort() 00:00:28.650 4bcab9fb9 correct kick for CQ full case 00:00:28.650 8531656d3 test/nvmf: Interrupt test for local pcie nvme device 00:00:28.698 [Pipeline] writeFile 00:00:28.714 [Pipeline] sh 00:00:29.001 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:29.013 [Pipeline] sh 00:00:29.295 + cat autorun-spdk.conf 00:00:29.295 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:29.295 SPDK_RUN_ASAN=1 00:00:29.295 SPDK_RUN_UBSAN=1 00:00:29.295 SPDK_TEST_RAID=1 00:00:29.295 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:29.302 RUN_NIGHTLY=0 00:00:29.304 [Pipeline] } 00:00:29.318 [Pipeline] // stage 00:00:29.333 [Pipeline] stage 00:00:29.335 [Pipeline] { (Run VM) 00:00:29.347 [Pipeline] sh 00:00:29.630 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:29.630 + echo 'Start stage prepare_nvme.sh' 00:00:29.630 Start stage prepare_nvme.sh 00:00:29.630 + [[ -n 7 ]] 00:00:29.630 + disk_prefix=ex7 00:00:29.630 + [[ -n /var/jenkins/workspace/raid-vg-autotest_2 ]] 00:00:29.630 + [[ -e 
/var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf ]] 00:00:29.630 + source /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf 00:00:29.630 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:29.630 ++ SPDK_RUN_ASAN=1 00:00:29.630 ++ SPDK_RUN_UBSAN=1 00:00:29.630 ++ SPDK_TEST_RAID=1 00:00:29.630 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:29.630 ++ RUN_NIGHTLY=0 00:00:29.630 + cd /var/jenkins/workspace/raid-vg-autotest_2 00:00:29.630 + nvme_files=() 00:00:29.630 + declare -A nvme_files 00:00:29.630 + backend_dir=/var/lib/libvirt/images/backends 00:00:29.630 + nvme_files['nvme.img']=5G 00:00:29.630 + nvme_files['nvme-cmb.img']=5G 00:00:29.630 + nvme_files['nvme-multi0.img']=4G 00:00:29.630 + nvme_files['nvme-multi1.img']=4G 00:00:29.630 + nvme_files['nvme-multi2.img']=4G 00:00:29.630 + nvme_files['nvme-openstack.img']=8G 00:00:29.630 + nvme_files['nvme-zns.img']=5G 00:00:29.630 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:29.630 + (( SPDK_TEST_FTL == 1 )) 00:00:29.630 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:29.630 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:00:29.630 + for nvme in "${!nvme_files[@]}" 00:00:29.630 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G 00:00:29.630 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:29.630 + for nvme in "${!nvme_files[@]}" 00:00:29.630 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G 00:00:29.630 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:29.630 + for nvme in "${!nvme_files[@]}" 00:00:29.630 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G 00:00:29.630 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:29.630 + for nvme in "${!nvme_files[@]}" 00:00:29.630 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G 00:00:29.630 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:29.630 + for nvme in "${!nvme_files[@]}" 00:00:29.630 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G 00:00:29.630 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:29.630 + for nvme in "${!nvme_files[@]}" 00:00:29.630 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G 00:00:29.630 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:29.630 + for nvme in "${!nvme_files[@]}" 00:00:29.630 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G 00:00:29.630 
Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:29.889 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu 00:00:29.889 + echo 'End stage prepare_nvme.sh' 00:00:29.889 End stage prepare_nvme.sh 00:00:29.901 [Pipeline] sh 00:00:30.183 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:30.183 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex7-nvme.img -b /var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -H -a -v -f fedora39 00:00:30.183 00:00:30.183 DIR=/var/jenkins/workspace/raid-vg-autotest_2/spdk/scripts/vagrant 00:00:30.183 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest_2/spdk 00:00:30.183 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest_2 00:00:30.183 HELP=0 00:00:30.183 DRY_RUN=0 00:00:30.183 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img, 00:00:30.183 NVME_DISKS_TYPE=nvme,nvme, 00:00:30.183 NVME_AUTO_CREATE=0 00:00:30.183 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img, 00:00:30.183 NVME_CMB=,, 00:00:30.183 NVME_PMR=,, 00:00:30.183 NVME_ZNS=,, 00:00:30.183 NVME_MS=,, 00:00:30.183 NVME_FDP=,, 00:00:30.183 SPDK_VAGRANT_DISTRO=fedora39 00:00:30.183 SPDK_VAGRANT_VMCPU=10 00:00:30.183 SPDK_VAGRANT_VMRAM=12288 00:00:30.183 SPDK_VAGRANT_PROVIDER=libvirt 00:00:30.183 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:30.183 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:30.183 SPDK_OPENSTACK_NETWORK=0 00:00:30.183 VAGRANT_PACKAGE_BOX=0 00:00:30.183 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 
00:00:30.183 FORCE_DISTRO=true 00:00:30.183 VAGRANT_BOX_VERSION= 00:00:30.183 EXTRA_VAGRANTFILES= 00:00:30.183 NIC_MODEL=virtio 00:00:30.183 00:00:30.183 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt' 00:00:30.183 /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest_2 00:00:32.723 Bringing machine 'default' up with 'libvirt' provider... 00:00:32.723 ==> default: Creating image (snapshot of base box volume). 00:00:32.985 ==> default: Creating domain with the following settings... 00:00:32.985 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1731849021_8f06f09683dd13c4b127 00:00:32.985 ==> default: -- Domain type: kvm 00:00:32.985 ==> default: -- Cpus: 10 00:00:32.985 ==> default: -- Feature: acpi 00:00:32.985 ==> default: -- Feature: apic 00:00:32.985 ==> default: -- Feature: pae 00:00:32.985 ==> default: -- Memory: 12288M 00:00:32.985 ==> default: -- Memory Backing: hugepages: 00:00:32.985 ==> default: -- Management MAC: 00:00:32.985 ==> default: -- Loader: 00:00:32.985 ==> default: -- Nvram: 00:00:32.985 ==> default: -- Base box: spdk/fedora39 00:00:32.985 ==> default: -- Storage pool: default 00:00:32.985 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1731849021_8f06f09683dd13c4b127.img (20G) 00:00:32.985 ==> default: -- Volume Cache: default 00:00:32.985 ==> default: -- Kernel: 00:00:32.985 ==> default: -- Initrd: 00:00:32.985 ==> default: -- Graphics Type: vnc 00:00:32.985 ==> default: -- Graphics Port: -1 00:00:32.985 ==> default: -- Graphics IP: 127.0.0.1 00:00:32.985 ==> default: -- Graphics Password: Not defined 00:00:32.985 ==> default: -- Video Type: cirrus 00:00:32.985 ==> default: -- Video VRAM: 9216 00:00:32.985 ==> default: -- Sound Type: 00:00:32.985 ==> default: -- Keymap: en-us 00:00:32.985 ==> default: -- TPM Path: 00:00:32.985 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:32.985 ==> default: -- Command line 
args: 00:00:32.985 ==> default: -> value=-device, 00:00:32.985 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:32.985 ==> default: -> value=-drive, 00:00:32.985 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-0-drive0, 00:00:32.985 ==> default: -> value=-device, 00:00:32.985 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:32.985 ==> default: -> value=-device, 00:00:32.985 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:32.985 ==> default: -> value=-drive, 00:00:32.985 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:00:32.985 ==> default: -> value=-device, 00:00:32.985 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:32.985 ==> default: -> value=-drive, 00:00:32.985 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:00:32.985 ==> default: -> value=-device, 00:00:32.985 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:32.985 ==> default: -> value=-drive, 00:00:32.985 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:00:32.985 ==> default: -> value=-device, 00:00:32.985 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:32.985 ==> default: Creating shared folders metadata... 00:00:32.985 ==> default: Starting domain. 00:00:34.896 ==> default: Waiting for domain to get an IP address... 00:00:49.787 ==> default: Waiting for SSH to become available... 00:00:51.164 ==> default: Configuring and enabling network interfaces... 
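[Editor's note] The QEMU arguments above wire one image file per namespace: each `-drive` is paired with a `-device nvme-ns` that attaches it to a controller created by `-device nvme`, with `nsid` counting up per namespace. A minimal sketch of how such triples can be assembled in shell — the function name, image paths, and ID scheme here are illustrative, not the pipeline's actual `vagrant_create_vm.sh` logic:

```shell
# Build the -drive/-device argument triples for a multi-namespace NVMe
# controller, mirroring the qemu command line shown in the log above.
# All names (build_nvme_args, the /tmp paths) are hypothetical.
build_nvme_args() {
    ctrl=$1; serial=$2; addr=$3; shift 3
    # Controller device first; namespaces attach to it via bus=${ctrl}.
    args="-device nvme,id=${ctrl},serial=${serial},addr=${addr}"
    nsid=1
    for img in "$@"; do
        drive="${ctrl}-drive$((nsid - 1))"
        args="$args -drive format=raw,file=${img},if=none,id=${drive}"
        args="$args -device nvme-ns,drive=${drive},bus=${ctrl},nsid=${nsid}"
        nsid=$((nsid + 1))
    done
    printf '%s\n' "$args"
}

# Three backing images -> one controller with namespaces 1..3, as for
# the ex7-nvme-multi*.img controller in this run.
build_nvme_args nvme-1 12341 0x11 /tmp/m0.img /tmp/m1.img /tmp/m2.img
```

The per-namespace `-drive ...,if=none` form (rather than a directly attached drive) is what lets several namespaces share a single controller.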
00:00:57.750 default: SSH address: 192.168.121.220:22 00:00:57.750 default: SSH username: vagrant 00:00:57.750 default: SSH auth method: private key 00:01:00.287 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:08.429 ==> default: Mounting SSHFS shared folder... 00:01:11.011 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:11.011 ==> default: Checking Mount.. 00:01:12.392 ==> default: Folder Successfully Mounted! 00:01:12.392 ==> default: Running provisioner: file... 00:01:13.342 default: ~/.gitconfig => .gitconfig 00:01:13.911 00:01:13.911 SUCCESS! 00:01:13.911 00:01:13.911 cd to /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use. 00:01:13.911 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:13.911 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm. 
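[Editor's note] With the VM up, the pipeline drives it non-interactively by dumping vagrant's generated SSH configuration and reusing it with plain `ssh -F`/`scp -F` (visible a few entries below as `vagrant ssh-config --host vagrant | sed -ne /^Host/,$p | tee ssh_conf`). A hedged sketch of the filtering step — demonstrated here on canned input, since it assumes no live vagrant environment:

```shell
# Keep everything from the first "Host" line onward, dropping any
# warnings or banner text vagrant may print before the actual config.
# extract_ssh_conf is an illustrative name, not part of the pipeline.
extract_ssh_conf() {
    sed -ne '/^Host/,$p'
}

# Canned stand-in for `vagrant ssh-config --host vagrant` output:
printf 'some vagrant warning\nHost vagrant\n  HostName 192.168.121.220\n' \
    | extract_ssh_conf
```

Writing the filtered result to a file (the `tee ssh_conf` in the log) lets every later step use stock `ssh -F ssh_conf vagrant@vagrant` without vagrant in the loop.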
00:01:13.911 00:01:13.920 [Pipeline] } 00:01:13.935 [Pipeline] // stage 00:01:13.945 [Pipeline] dir 00:01:13.945 Running in /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt 00:01:13.947 [Pipeline] { 00:01:13.959 [Pipeline] catchError 00:01:13.961 [Pipeline] { 00:01:13.973 [Pipeline] sh 00:01:14.255 + vagrant ssh-config --host vagrant 00:01:14.255 + sed -ne /^Host/,$p 00:01:14.255 + tee ssh_conf 00:01:16.790 Host vagrant 00:01:16.791 HostName 192.168.121.220 00:01:16.791 User vagrant 00:01:16.791 Port 22 00:01:16.791 UserKnownHostsFile /dev/null 00:01:16.791 StrictHostKeyChecking no 00:01:16.791 PasswordAuthentication no 00:01:16.791 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:16.791 IdentitiesOnly yes 00:01:16.791 LogLevel FATAL 00:01:16.791 ForwardAgent yes 00:01:16.791 ForwardX11 yes 00:01:16.791 00:01:16.804 [Pipeline] withEnv 00:01:16.805 [Pipeline] { 00:01:16.817 [Pipeline] sh 00:01:17.098 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:17.098 source /etc/os-release 00:01:17.098 [[ -e /image.version ]] && img=$(< /image.version) 00:01:17.098 # Minimal, systemd-like check. 00:01:17.098 if [[ -e /.dockerenv ]]; then 00:01:17.098 # Clear garbage from the node's name: 00:01:17.098 # agt-er_autotest_547-896 -> autotest_547-896 00:01:17.098 # $HOSTNAME is the actual container id 00:01:17.098 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:17.098 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:17.098 # We can assume this is a mount from a host where container is running, 00:01:17.098 # so fetch its hostname to easily identify the target swarm worker. 
00:01:17.098 container="$(< /etc/hostname) ($agent)" 00:01:17.098 else 00:01:17.098 # Fallback 00:01:17.098 container=$agent 00:01:17.098 fi 00:01:17.098 fi 00:01:17.098 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:17.098 00:01:17.369 [Pipeline] } 00:01:17.385 [Pipeline] // withEnv 00:01:17.394 [Pipeline] setCustomBuildProperty 00:01:17.408 [Pipeline] stage 00:01:17.410 [Pipeline] { (Tests) 00:01:17.426 [Pipeline] sh 00:01:17.712 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:17.986 [Pipeline] sh 00:01:18.271 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:18.546 [Pipeline] timeout 00:01:18.546 Timeout set to expire in 1 hr 30 min 00:01:18.548 [Pipeline] { 00:01:18.563 [Pipeline] sh 00:01:18.851 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:19.421 HEAD is now at ca87521f7 test/nvme/interrupt: Verify pre|post IO cpu load 00:01:19.435 [Pipeline] sh 00:01:19.722 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:19.999 [Pipeline] sh 00:01:20.281 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:20.559 [Pipeline] sh 00:01:20.853 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo 00:01:21.113 ++ readlink -f spdk_repo 00:01:21.113 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:21.113 + [[ -n /home/vagrant/spdk_repo ]] 00:01:21.113 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:21.113 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:21.113 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:21.113 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:21.113 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:21.113 + [[ raid-vg-autotest == pkgdep-* ]] 00:01:21.113 + cd /home/vagrant/spdk_repo 00:01:21.113 + source /etc/os-release 00:01:21.113 ++ NAME='Fedora Linux' 00:01:21.113 ++ VERSION='39 (Cloud Edition)' 00:01:21.113 ++ ID=fedora 00:01:21.113 ++ VERSION_ID=39 00:01:21.113 ++ VERSION_CODENAME= 00:01:21.113 ++ PLATFORM_ID=platform:f39 00:01:21.113 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:21.113 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:21.113 ++ LOGO=fedora-logo-icon 00:01:21.113 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:21.113 ++ HOME_URL=https://fedoraproject.org/ 00:01:21.113 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:21.113 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:21.113 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:21.113 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:21.113 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:21.113 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:21.113 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:21.113 ++ SUPPORT_END=2024-11-12 00:01:21.113 ++ VARIANT='Cloud Edition' 00:01:21.113 ++ VARIANT_ID=cloud 00:01:21.113 + uname -a 00:01:21.113 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:21.113 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:21.682 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:21.682 Hugepages 00:01:21.682 node hugesize free / total 00:01:21.682 node0 1048576kB 0 / 0 00:01:21.682 node0 2048kB 0 / 0 00:01:21.682 00:01:21.682 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:21.682 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:21.682 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:21.682 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 
nvme1n1 nvme1n2 nvme1n3 00:01:21.682 + rm -f /tmp/spdk-ld-path 00:01:21.682 + source autorun-spdk.conf 00:01:21.682 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:21.682 ++ SPDK_RUN_ASAN=1 00:01:21.682 ++ SPDK_RUN_UBSAN=1 00:01:21.682 ++ SPDK_TEST_RAID=1 00:01:21.682 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:21.682 ++ RUN_NIGHTLY=0 00:01:21.682 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:21.682 + [[ -n '' ]] 00:01:21.682 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:21.682 + for M in /var/spdk/build-*-manifest.txt 00:01:21.682 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:21.682 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:21.682 + for M in /var/spdk/build-*-manifest.txt 00:01:21.682 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:21.682 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:21.942 + for M in /var/spdk/build-*-manifest.txt 00:01:21.942 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:21.942 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:21.942 ++ uname 00:01:21.942 + [[ Linux == \L\i\n\u\x ]] 00:01:21.942 + sudo dmesg -T 00:01:21.942 + sudo dmesg --clear 00:01:21.942 + dmesg_pid=5428 00:01:21.942 + [[ Fedora Linux == FreeBSD ]] 00:01:21.942 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:21.942 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:21.942 + sudo dmesg -Tw 00:01:21.942 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:21.942 + [[ -x /usr/src/fio-static/fio ]] 00:01:21.942 + export FIO_BIN=/usr/src/fio-static/fio 00:01:21.942 + FIO_BIN=/usr/src/fio-static/fio 00:01:21.942 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:21.942 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:21.942 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:21.942 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:21.942 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:21.942 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:21.942 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:21.942 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:21.942 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:21.942 13:11:11 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:21.942 13:11:11 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:21.942 13:11:11 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:21.942 13:11:11 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1 00:01:21.942 13:11:11 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1 00:01:21.942 13:11:11 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1 00:01:21.942 13:11:11 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:21.942 13:11:11 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0 00:01:21.942 13:11:11 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:21.942 13:11:11 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:22.201 13:11:11 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:22.201 13:11:11 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:22.201 13:11:11 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:22.201 13:11:11 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:22.201 13:11:11 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:22.201 13:11:11 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:22.201 13:11:11 -- 
paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:22.201 13:11:11 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:22.201 13:11:11 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:22.201 13:11:11 -- paths/export.sh@5 -- $ export PATH 00:01:22.201 13:11:11 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:22.201 13:11:11 -- 
common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:22.201 13:11:11 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:22.201 13:11:11 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1731849071.XXXXXX 00:01:22.201 13:11:11 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1731849071.8BLVAm 00:01:22.201 13:11:11 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:22.202 13:11:11 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:01:22.202 13:11:11 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:22.202 13:11:11 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:22.202 13:11:11 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:22.202 13:11:11 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:22.202 13:11:11 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:22.202 13:11:11 -- common/autotest_common.sh@10 -- $ set +x 00:01:22.202 13:11:11 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 00:01:22.202 13:11:11 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:22.202 13:11:11 -- pm/common@17 -- $ local monitor 00:01:22.202 13:11:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:22.202 13:11:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:22.202 13:11:11 -- pm/common@25 -- $ sleep 1 00:01:22.202 13:11:11 -- pm/common@21 -- $ date +%s 00:01:22.202 13:11:11 -- pm/common@21 -- $ date +%s 00:01:22.202 
13:11:11 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731849071 00:01:22.202 13:11:11 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731849071 00:01:22.202 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731849071_collect-vmstat.pm.log 00:01:22.202 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731849071_collect-cpu-load.pm.log 00:01:23.141 13:11:12 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:01:23.141 13:11:12 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:23.141 13:11:12 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:23.141 13:11:12 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:23.141 13:11:12 -- spdk/autobuild.sh@16 -- $ date -u 00:01:23.141 Sun Nov 17 01:11:12 PM UTC 2024 00:01:23.141 13:11:12 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:23.141 v25.01-pre-190-gca87521f7 00:01:23.141 13:11:12 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:23.141 13:11:12 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:23.141 13:11:12 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:23.141 13:11:12 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:23.141 13:11:12 -- common/autotest_common.sh@10 -- $ set +x 00:01:23.141 ************************************ 00:01:23.141 START TEST asan 00:01:23.141 ************************************ 00:01:23.141 using asan 00:01:23.141 13:11:12 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:01:23.141 00:01:23.141 real 0m0.001s 00:01:23.141 user 0m0.000s 00:01:23.141 sys 0m0.000s 00:01:23.141 13:11:12 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:23.141 13:11:12 asan -- common/autotest_common.sh@10 -- $ set +x 
00:01:23.141 ************************************ 00:01:23.141 END TEST asan 00:01:23.141 ************************************ 00:01:23.141 13:11:12 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:23.141 13:11:12 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:23.141 13:11:12 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:23.141 13:11:12 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:23.141 13:11:12 -- common/autotest_common.sh@10 -- $ set +x 00:01:23.141 ************************************ 00:01:23.141 START TEST ubsan 00:01:23.141 ************************************ 00:01:23.141 using ubsan 00:01:23.142 13:11:12 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:23.142 00:01:23.142 real 0m0.000s 00:01:23.142 user 0m0.000s 00:01:23.142 sys 0m0.000s 00:01:23.142 13:11:12 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:23.142 13:11:12 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:23.142 ************************************ 00:01:23.142 END TEST ubsan 00:01:23.142 ************************************ 00:01:23.401 13:11:12 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:23.401 13:11:12 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:23.401 13:11:12 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:23.401 13:11:12 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:23.401 13:11:12 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:23.401 13:11:12 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:23.401 13:11:12 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:23.401 13:11:12 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:23.401 13:11:12 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared 00:01:23.401 Using default SPDK env in 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:23.401 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:23.970 Using 'verbs' RDMA provider 00:01:39.883 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:01:54.776 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:01:55.345 Creating mk/config.mk...done. 00:01:55.346 Creating mk/cc.flags.mk...done. 00:01:55.346 Type 'make' to build. 00:01:55.346 13:11:44 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:01:55.346 13:11:44 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:55.346 13:11:44 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:55.346 13:11:44 -- common/autotest_common.sh@10 -- $ set +x 00:01:55.346 ************************************ 00:01:55.346 START TEST make 00:01:55.346 ************************************ 00:01:55.346 13:11:44 make -- common/autotest_common.sh@1129 -- $ make -j10 00:01:55.914 make[1]: Nothing to be done for 'all'. 
00:02:05.998 The Meson build system 00:02:05.998 Version: 1.5.0 00:02:05.998 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:05.998 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:05.998 Build type: native build 00:02:05.998 Program cat found: YES (/usr/bin/cat) 00:02:05.998 Project name: DPDK 00:02:05.998 Project version: 24.03.0 00:02:05.998 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:05.998 C linker for the host machine: cc ld.bfd 2.40-14 00:02:05.998 Host machine cpu family: x86_64 00:02:05.998 Host machine cpu: x86_64 00:02:05.998 Message: ## Building in Developer Mode ## 00:02:05.998 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:05.998 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:05.998 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:05.998 Program python3 found: YES (/usr/bin/python3) 00:02:05.998 Program cat found: YES (/usr/bin/cat) 00:02:05.998 Compiler for C supports arguments -march=native: YES 00:02:05.998 Checking for size of "void *" : 8 00:02:05.998 Checking for size of "void *" : 8 (cached) 00:02:05.998 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:05.998 Library m found: YES 00:02:05.998 Library numa found: YES 00:02:05.998 Has header "numaif.h" : YES 00:02:05.998 Library fdt found: NO 00:02:05.998 Library execinfo found: NO 00:02:05.998 Has header "execinfo.h" : YES 00:02:05.998 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:05.998 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:05.998 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:05.998 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:05.998 Run-time dependency openssl found: YES 3.1.1 00:02:05.998 Run-time dependency libpcap found: YES 1.10.4 00:02:05.998 Has header "pcap.h" with dependency 
libpcap: YES 00:02:05.998 Compiler for C supports arguments -Wcast-qual: YES 00:02:05.998 Compiler for C supports arguments -Wdeprecated: YES 00:02:05.998 Compiler for C supports arguments -Wformat: YES 00:02:05.998 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:05.998 Compiler for C supports arguments -Wformat-security: NO 00:02:05.998 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:05.998 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:05.998 Compiler for C supports arguments -Wnested-externs: YES 00:02:05.998 Compiler for C supports arguments -Wold-style-definition: YES 00:02:05.998 Compiler for C supports arguments -Wpointer-arith: YES 00:02:05.998 Compiler for C supports arguments -Wsign-compare: YES 00:02:05.998 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:05.998 Compiler for C supports arguments -Wundef: YES 00:02:05.998 Compiler for C supports arguments -Wwrite-strings: YES 00:02:05.998 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:05.998 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:05.998 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:05.998 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:05.998 Program objdump found: YES (/usr/bin/objdump) 00:02:05.998 Compiler for C supports arguments -mavx512f: YES 00:02:05.998 Checking if "AVX512 checking" compiles: YES 00:02:05.998 Fetching value of define "__SSE4_2__" : 1 00:02:05.998 Fetching value of define "__AES__" : 1 00:02:05.998 Fetching value of define "__AVX__" : 1 00:02:05.998 Fetching value of define "__AVX2__" : 1 00:02:05.998 Fetching value of define "__AVX512BW__" : 1 00:02:05.998 Fetching value of define "__AVX512CD__" : 1 00:02:05.998 Fetching value of define "__AVX512DQ__" : 1 00:02:05.998 Fetching value of define "__AVX512F__" : 1 00:02:05.998 Fetching value of define "__AVX512VL__" : 1 00:02:05.998 Fetching value of define 
"__PCLMUL__" : 1 00:02:05.998 Fetching value of define "__RDRND__" : 1 00:02:05.998 Fetching value of define "__RDSEED__" : 1 00:02:05.998 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:05.998 Fetching value of define "__znver1__" : (undefined) 00:02:05.998 Fetching value of define "__znver2__" : (undefined) 00:02:05.998 Fetching value of define "__znver3__" : (undefined) 00:02:05.998 Fetching value of define "__znver4__" : (undefined) 00:02:05.998 Library asan found: YES 00:02:05.998 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:05.998 Message: lib/log: Defining dependency "log" 00:02:05.998 Message: lib/kvargs: Defining dependency "kvargs" 00:02:05.998 Message: lib/telemetry: Defining dependency "telemetry" 00:02:05.998 Library rt found: YES 00:02:05.998 Checking for function "getentropy" : NO 00:02:05.998 Message: lib/eal: Defining dependency "eal" 00:02:05.998 Message: lib/ring: Defining dependency "ring" 00:02:05.998 Message: lib/rcu: Defining dependency "rcu" 00:02:05.998 Message: lib/mempool: Defining dependency "mempool" 00:02:05.998 Message: lib/mbuf: Defining dependency "mbuf" 00:02:05.998 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:05.998 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:05.998 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:05.998 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:05.998 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:05.998 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:05.998 Compiler for C supports arguments -mpclmul: YES 00:02:05.998 Compiler for C supports arguments -maes: YES 00:02:05.998 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:05.998 Compiler for C supports arguments -mavx512bw: YES 00:02:05.998 Compiler for C supports arguments -mavx512dq: YES 00:02:05.998 Compiler for C supports arguments -mavx512vl: YES 00:02:05.998 Compiler for C supports arguments -mvpclmulqdq: YES 
00:02:05.998 Compiler for C supports arguments -mavx2: YES 00:02:05.998 Compiler for C supports arguments -mavx: YES 00:02:05.998 Message: lib/net: Defining dependency "net" 00:02:05.998 Message: lib/meter: Defining dependency "meter" 00:02:05.998 Message: lib/ethdev: Defining dependency "ethdev" 00:02:05.998 Message: lib/pci: Defining dependency "pci" 00:02:05.998 Message: lib/cmdline: Defining dependency "cmdline" 00:02:05.998 Message: lib/hash: Defining dependency "hash" 00:02:05.998 Message: lib/timer: Defining dependency "timer" 00:02:05.998 Message: lib/compressdev: Defining dependency "compressdev" 00:02:05.998 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:05.998 Message: lib/dmadev: Defining dependency "dmadev" 00:02:05.998 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:05.998 Message: lib/power: Defining dependency "power" 00:02:05.998 Message: lib/reorder: Defining dependency "reorder" 00:02:05.998 Message: lib/security: Defining dependency "security" 00:02:05.998 Has header "linux/userfaultfd.h" : YES 00:02:05.998 Has header "linux/vduse.h" : YES 00:02:05.998 Message: lib/vhost: Defining dependency "vhost" 00:02:05.998 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:05.998 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:05.998 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:05.998 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:05.998 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:05.998 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:05.998 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:05.998 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:05.998 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:05.998 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:05.998 
Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:05.998 Configuring doxy-api-html.conf using configuration 00:02:05.998 Configuring doxy-api-man.conf using configuration 00:02:05.998 Program mandb found: YES (/usr/bin/mandb) 00:02:05.998 Program sphinx-build found: NO 00:02:05.998 Configuring rte_build_config.h using configuration 00:02:05.998 Message: 00:02:05.998 ================= 00:02:05.998 Applications Enabled 00:02:05.998 ================= 00:02:05.998 00:02:05.998 apps: 00:02:05.998 00:02:05.998 00:02:05.998 Message: 00:02:05.998 ================= 00:02:05.998 Libraries Enabled 00:02:05.998 ================= 00:02:05.998 00:02:05.998 libs: 00:02:05.998 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:05.998 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:05.998 cryptodev, dmadev, power, reorder, security, vhost, 00:02:05.998 00:02:05.998 Message: 00:02:05.998 =============== 00:02:05.998 Drivers Enabled 00:02:05.998 =============== 00:02:05.998 00:02:05.998 common: 00:02:05.998 00:02:05.998 bus: 00:02:05.998 pci, vdev, 00:02:05.998 mempool: 00:02:05.998 ring, 00:02:05.998 dma: 00:02:05.998 00:02:05.998 net: 00:02:05.998 00:02:05.998 crypto: 00:02:05.998 00:02:05.998 compress: 00:02:05.998 00:02:05.998 vdpa: 00:02:05.998 00:02:05.998 00:02:05.998 Message: 00:02:05.998 ================= 00:02:05.998 Content Skipped 00:02:05.998 ================= 00:02:05.998 00:02:05.998 apps: 00:02:05.998 dumpcap: explicitly disabled via build config 00:02:05.998 graph: explicitly disabled via build config 00:02:05.998 pdump: explicitly disabled via build config 00:02:05.998 proc-info: explicitly disabled via build config 00:02:05.998 test-acl: explicitly disabled via build config 00:02:05.998 test-bbdev: explicitly disabled via build config 00:02:05.998 test-cmdline: explicitly disabled via build config 00:02:05.998 test-compress-perf: explicitly disabled via build config 00:02:05.998 test-crypto-perf: explicitly disabled via build 
config 00:02:05.998 test-dma-perf: explicitly disabled via build config 00:02:05.998 test-eventdev: explicitly disabled via build config 00:02:05.998 test-fib: explicitly disabled via build config 00:02:05.999 test-flow-perf: explicitly disabled via build config 00:02:05.999 test-gpudev: explicitly disabled via build config 00:02:05.999 test-mldev: explicitly disabled via build config 00:02:05.999 test-pipeline: explicitly disabled via build config 00:02:05.999 test-pmd: explicitly disabled via build config 00:02:05.999 test-regex: explicitly disabled via build config 00:02:05.999 test-sad: explicitly disabled via build config 00:02:05.999 test-security-perf: explicitly disabled via build config 00:02:05.999 00:02:05.999 libs: 00:02:05.999 argparse: explicitly disabled via build config 00:02:05.999 metrics: explicitly disabled via build config 00:02:05.999 acl: explicitly disabled via build config 00:02:05.999 bbdev: explicitly disabled via build config 00:02:05.999 bitratestats: explicitly disabled via build config 00:02:05.999 bpf: explicitly disabled via build config 00:02:05.999 cfgfile: explicitly disabled via build config 00:02:05.999 distributor: explicitly disabled via build config 00:02:05.999 efd: explicitly disabled via build config 00:02:05.999 eventdev: explicitly disabled via build config 00:02:05.999 dispatcher: explicitly disabled via build config 00:02:05.999 gpudev: explicitly disabled via build config 00:02:05.999 gro: explicitly disabled via build config 00:02:05.999 gso: explicitly disabled via build config 00:02:05.999 ip_frag: explicitly disabled via build config 00:02:05.999 jobstats: explicitly disabled via build config 00:02:05.999 latencystats: explicitly disabled via build config 00:02:05.999 lpm: explicitly disabled via build config 00:02:05.999 member: explicitly disabled via build config 00:02:05.999 pcapng: explicitly disabled via build config 00:02:05.999 rawdev: explicitly disabled via build config 00:02:05.999 regexdev: explicitly 
disabled via build config 00:02:05.999 mldev: explicitly disabled via build config 00:02:05.999 rib: explicitly disabled via build config 00:02:05.999 sched: explicitly disabled via build config 00:02:05.999 stack: explicitly disabled via build config 00:02:05.999 ipsec: explicitly disabled via build config 00:02:05.999 pdcp: explicitly disabled via build config 00:02:05.999 fib: explicitly disabled via build config 00:02:05.999 port: explicitly disabled via build config 00:02:05.999 pdump: explicitly disabled via build config 00:02:05.999 table: explicitly disabled via build config 00:02:05.999 pipeline: explicitly disabled via build config 00:02:05.999 graph: explicitly disabled via build config 00:02:05.999 node: explicitly disabled via build config 00:02:05.999 00:02:05.999 drivers: 00:02:05.999 common/cpt: not in enabled drivers build config 00:02:05.999 common/dpaax: not in enabled drivers build config 00:02:05.999 common/iavf: not in enabled drivers build config 00:02:05.999 common/idpf: not in enabled drivers build config 00:02:05.999 common/ionic: not in enabled drivers build config 00:02:05.999 common/mvep: not in enabled drivers build config 00:02:05.999 common/octeontx: not in enabled drivers build config 00:02:05.999 bus/auxiliary: not in enabled drivers build config 00:02:05.999 bus/cdx: not in enabled drivers build config 00:02:05.999 bus/dpaa: not in enabled drivers build config 00:02:05.999 bus/fslmc: not in enabled drivers build config 00:02:05.999 bus/ifpga: not in enabled drivers build config 00:02:05.999 bus/platform: not in enabled drivers build config 00:02:05.999 bus/uacce: not in enabled drivers build config 00:02:05.999 bus/vmbus: not in enabled drivers build config 00:02:05.999 common/cnxk: not in enabled drivers build config 00:02:05.999 common/mlx5: not in enabled drivers build config 00:02:05.999 common/nfp: not in enabled drivers build config 00:02:05.999 common/nitrox: not in enabled drivers build config 00:02:05.999 common/qat: not 
in enabled drivers build config 00:02:05.999 common/sfc_efx: not in enabled drivers build config 00:02:05.999 mempool/bucket: not in enabled drivers build config 00:02:05.999 mempool/cnxk: not in enabled drivers build config 00:02:05.999 mempool/dpaa: not in enabled drivers build config 00:02:05.999 mempool/dpaa2: not in enabled drivers build config 00:02:05.999 mempool/octeontx: not in enabled drivers build config 00:02:05.999 mempool/stack: not in enabled drivers build config 00:02:05.999 dma/cnxk: not in enabled drivers build config 00:02:05.999 dma/dpaa: not in enabled drivers build config 00:02:05.999 dma/dpaa2: not in enabled drivers build config 00:02:05.999 dma/hisilicon: not in enabled drivers build config 00:02:05.999 dma/idxd: not in enabled drivers build config 00:02:05.999 dma/ioat: not in enabled drivers build config 00:02:05.999 dma/skeleton: not in enabled drivers build config 00:02:05.999 net/af_packet: not in enabled drivers build config 00:02:05.999 net/af_xdp: not in enabled drivers build config 00:02:05.999 net/ark: not in enabled drivers build config 00:02:05.999 net/atlantic: not in enabled drivers build config 00:02:05.999 net/avp: not in enabled drivers build config 00:02:05.999 net/axgbe: not in enabled drivers build config 00:02:05.999 net/bnx2x: not in enabled drivers build config 00:02:05.999 net/bnxt: not in enabled drivers build config 00:02:05.999 net/bonding: not in enabled drivers build config 00:02:05.999 net/cnxk: not in enabled drivers build config 00:02:05.999 net/cpfl: not in enabled drivers build config 00:02:05.999 net/cxgbe: not in enabled drivers build config 00:02:05.999 net/dpaa: not in enabled drivers build config 00:02:05.999 net/dpaa2: not in enabled drivers build config 00:02:05.999 net/e1000: not in enabled drivers build config 00:02:05.999 net/ena: not in enabled drivers build config 00:02:05.999 net/enetc: not in enabled drivers build config 00:02:05.999 net/enetfec: not in enabled drivers build config 
00:02:05.999 net/enic: not in enabled drivers build config 00:02:05.999 net/failsafe: not in enabled drivers build config 00:02:05.999 net/fm10k: not in enabled drivers build config 00:02:05.999 net/gve: not in enabled drivers build config 00:02:05.999 net/hinic: not in enabled drivers build config 00:02:05.999 net/hns3: not in enabled drivers build config 00:02:05.999 net/i40e: not in enabled drivers build config 00:02:05.999 net/iavf: not in enabled drivers build config 00:02:05.999 net/ice: not in enabled drivers build config 00:02:05.999 net/idpf: not in enabled drivers build config 00:02:05.999 net/igc: not in enabled drivers build config 00:02:05.999 net/ionic: not in enabled drivers build config 00:02:05.999 net/ipn3ke: not in enabled drivers build config 00:02:05.999 net/ixgbe: not in enabled drivers build config 00:02:05.999 net/mana: not in enabled drivers build config 00:02:05.999 net/memif: not in enabled drivers build config 00:02:05.999 net/mlx4: not in enabled drivers build config 00:02:05.999 net/mlx5: not in enabled drivers build config 00:02:05.999 net/mvneta: not in enabled drivers build config 00:02:05.999 net/mvpp2: not in enabled drivers build config 00:02:05.999 net/netvsc: not in enabled drivers build config 00:02:05.999 net/nfb: not in enabled drivers build config 00:02:05.999 net/nfp: not in enabled drivers build config 00:02:05.999 net/ngbe: not in enabled drivers build config 00:02:05.999 net/null: not in enabled drivers build config 00:02:05.999 net/octeontx: not in enabled drivers build config 00:02:05.999 net/octeon_ep: not in enabled drivers build config 00:02:05.999 net/pcap: not in enabled drivers build config 00:02:05.999 net/pfe: not in enabled drivers build config 00:02:05.999 net/qede: not in enabled drivers build config 00:02:05.999 net/ring: not in enabled drivers build config 00:02:05.999 net/sfc: not in enabled drivers build config 00:02:05.999 net/softnic: not in enabled drivers build config 00:02:05.999 net/tap: not in 
enabled drivers build config 00:02:05.999 net/thunderx: not in enabled drivers build config 00:02:05.999 net/txgbe: not in enabled drivers build config 00:02:05.999 net/vdev_netvsc: not in enabled drivers build config 00:02:05.999 net/vhost: not in enabled drivers build config 00:02:05.999 net/virtio: not in enabled drivers build config 00:02:05.999 net/vmxnet3: not in enabled drivers build config 00:02:05.999 raw/*: missing internal dependency, "rawdev" 00:02:05.999 crypto/armv8: not in enabled drivers build config 00:02:05.999 crypto/bcmfs: not in enabled drivers build config 00:02:05.999 crypto/caam_jr: not in enabled drivers build config 00:02:05.999 crypto/ccp: not in enabled drivers build config 00:02:05.999 crypto/cnxk: not in enabled drivers build config 00:02:05.999 crypto/dpaa_sec: not in enabled drivers build config 00:02:05.999 crypto/dpaa2_sec: not in enabled drivers build config 00:02:05.999 crypto/ipsec_mb: not in enabled drivers build config 00:02:05.999 crypto/mlx5: not in enabled drivers build config 00:02:05.999 crypto/mvsam: not in enabled drivers build config 00:02:05.999 crypto/nitrox: not in enabled drivers build config 00:02:05.999 crypto/null: not in enabled drivers build config 00:02:05.999 crypto/octeontx: not in enabled drivers build config 00:02:05.999 crypto/openssl: not in enabled drivers build config 00:02:05.999 crypto/scheduler: not in enabled drivers build config 00:02:05.999 crypto/uadk: not in enabled drivers build config 00:02:05.999 crypto/virtio: not in enabled drivers build config 00:02:05.999 compress/isal: not in enabled drivers build config 00:02:05.999 compress/mlx5: not in enabled drivers build config 00:02:05.999 compress/nitrox: not in enabled drivers build config 00:02:05.999 compress/octeontx: not in enabled drivers build config 00:02:05.999 compress/zlib: not in enabled drivers build config 00:02:05.999 regex/*: missing internal dependency, "regexdev" 00:02:05.999 ml/*: missing internal dependency, "mldev" 
00:02:05.999 vdpa/ifc: not in enabled drivers build config 00:02:05.999 vdpa/mlx5: not in enabled drivers build config 00:02:05.999 vdpa/nfp: not in enabled drivers build config 00:02:05.999 vdpa/sfc: not in enabled drivers build config 00:02:05.999 event/*: missing internal dependency, "eventdev" 00:02:05.999 baseband/*: missing internal dependency, "bbdev" 00:02:05.999 gpu/*: missing internal dependency, "gpudev" 00:02:05.999 00:02:05.999 00:02:06.258 Build targets in project: 85 00:02:06.258 00:02:06.258 DPDK 24.03.0 00:02:06.258 00:02:06.258 User defined options 00:02:06.258 buildtype : debug 00:02:06.258 default_library : shared 00:02:06.258 libdir : lib 00:02:06.258 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:06.258 b_sanitize : address 00:02:06.258 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:06.258 c_link_args : 00:02:06.258 cpu_instruction_set: native 00:02:06.258 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:06.258 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:06.258 enable_docs : false 00:02:06.258 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:06.258 enable_kmods : false 00:02:06.258 max_lcores : 128 00:02:06.258 tests : false 00:02:06.258 00:02:06.258 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:06.825 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:06.825 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:06.825 [2/268] Compiling C object 
lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:06.825 [3/268] Linking static target lib/librte_kvargs.a 00:02:07.084 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:07.084 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:07.084 [6/268] Linking static target lib/librte_log.a 00:02:07.342 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:07.342 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:07.342 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:07.342 [10/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.342 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:07.342 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:07.342 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:07.601 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:07.601 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:07.601 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:07.601 [17/268] Linking static target lib/librte_telemetry.a 00:02:07.601 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:07.859 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.860 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:07.860 [21/268] Linking target lib/librte_log.so.24.1 00:02:08.117 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:08.117 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:08.117 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:08.117 [25/268] 
Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:08.117 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:08.117 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:08.117 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:08.117 [29/268] Linking target lib/librte_kvargs.so.24.1 00:02:08.375 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:08.375 [31/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.375 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:08.375 [33/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:08.375 [34/268] Linking target lib/librte_telemetry.so.24.1 00:02:08.660 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:08.660 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:08.660 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:08.660 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:08.660 [39/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:08.660 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:08.660 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:08.660 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:08.660 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:08.955 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:08.955 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:08.955 [46/268] Compiling C object 
lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:09.215 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:09.215 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:09.215 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:09.215 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:09.215 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:09.474 [52/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:09.474 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:09.474 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:09.474 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:09.474 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:09.475 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:09.734 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:09.734 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:09.734 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:09.734 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:09.734 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:09.734 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:09.993 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:09.993 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:09.993 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:09.993 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:10.251 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:10.251 
[69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:10.509 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:10.509 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:10.509 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:10.509 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:10.509 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:10.509 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:10.509 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:10.769 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:10.769 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:10.769 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:10.769 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:11.028 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:11.028 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:11.287 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:11.287 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:11.287 [85/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:11.287 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:11.287 [87/268] Linking static target lib/librte_eal.a 00:02:11.287 [88/268] Linking static target lib/librte_ring.a 00:02:11.287 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:11.287 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:11.546 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:11.546 [92/268] Linking static target 
lib/librte_mempool.a 00:02:11.546 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:11.546 [94/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:11.546 [95/268] Linking static target lib/librte_rcu.a 00:02:11.546 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:11.806 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:11.806 [98/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.066 [99/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:12.066 [100/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.066 [101/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:12.066 [102/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:12.066 [103/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:12.066 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:12.066 [105/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:12.066 [106/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:12.066 [107/268] Linking static target lib/librte_mbuf.a 00:02:12.324 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:12.324 [109/268] Linking static target lib/librte_meter.a 00:02:12.582 [110/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:12.582 [111/268] Linking static target lib/librte_net.a 00:02:12.582 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:12.582 [113/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.582 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:12.582 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:12.582 [116/268] Generating lib/meter.sym_chk with a 
custom command (wrapped by meson to capture output) 00:02:12.841 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:12.841 [118/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.101 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:13.101 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:13.101 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:13.101 [122/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.361 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:13.620 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:13.620 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:13.620 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:13.620 [127/268] Linking static target lib/librte_pci.a 00:02:13.620 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:13.620 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:13.879 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:13.879 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:13.879 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:13.879 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:13.879 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:13.879 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:13.879 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:13.879 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:13.879 [138/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:14.138 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:14.138 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:14.138 [141/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.138 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:14.138 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:14.138 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:14.138 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:14.396 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:14.396 [147/268] Linking static target lib/librte_cmdline.a 00:02:14.396 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:14.396 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:14.653 [150/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:14.653 [151/268] Linking static target lib/librte_timer.a 00:02:14.653 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:14.911 [153/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:14.911 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:14.911 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:14.911 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:14.911 [157/268] Linking static target lib/librte_compressdev.a 00:02:15.170 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:15.170 [159/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.170 [160/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:15.449 [161/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:15.449 [162/268] Linking static target lib/librte_ethdev.a 00:02:15.449 [163/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:15.449 [164/268] Linking static target lib/librte_hash.a 00:02:15.449 [165/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:15.449 [166/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:15.707 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:15.707 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:15.707 [169/268] Linking static target lib/librte_dmadev.a 00:02:15.707 [170/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.707 [171/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.966 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:15.966 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:15.966 [174/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:16.225 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:16.225 [176/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:16.225 [177/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:16.225 [178/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:16.225 [179/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:16.484 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:16.484 [181/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.484 [182/268] Generating lib/hash.sym_chk with a custom 
command (wrapped by meson to capture output) 00:02:16.484 [183/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:16.484 [184/268] Linking static target lib/librte_cryptodev.a 00:02:16.484 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:16.743 [186/268] Linking static target lib/librte_power.a 00:02:17.002 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:17.002 [188/268] Linking static target lib/librte_reorder.a 00:02:17.002 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:17.002 [190/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:17.002 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:17.002 [192/268] Linking static target lib/librte_security.a 00:02:17.002 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:17.570 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.570 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:17.828 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.828 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.828 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:17.828 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:18.087 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:18.087 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:18.346 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:18.346 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:18.346 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:18.605 [205/268] 
Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:18.605 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:18.605 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:18.605 [208/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:18.605 [209/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:18.605 [210/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:18.864 [211/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.864 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:18.864 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:18.864 [214/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:18.864 [215/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:18.864 [216/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:18.864 [217/268] Linking static target drivers/librte_bus_vdev.a 00:02:18.864 [218/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:18.864 [219/268] Linking static target drivers/librte_bus_pci.a 00:02:19.123 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:19.123 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:19.123 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.394 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:19.394 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:19.394 [225/268] Compiling C object 
drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:19.394 [226/268] Linking static target drivers/librte_mempool_ring.a 00:02:19.394 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.792 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:21.732 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.732 [230/268] Linking target lib/librte_eal.so.24.1 00:02:21.732 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:21.991 [232/268] Linking target lib/librte_ring.so.24.1 00:02:21.991 [233/268] Linking target lib/librte_meter.so.24.1 00:02:21.991 [234/268] Linking target lib/librte_timer.so.24.1 00:02:21.991 [235/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:21.991 [236/268] Linking target lib/librte_pci.so.24.1 00:02:21.991 [237/268] Linking target lib/librte_dmadev.so.24.1 00:02:21.991 [238/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:21.991 [239/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:21.991 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:21.991 [241/268] Linking target lib/librte_rcu.so.24.1 00:02:21.991 [242/268] Linking target lib/librte_mempool.so.24.1 00:02:21.991 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:21.991 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:21.991 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:22.251 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:22.251 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:22.251 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 
00:02:22.251 [249/268] Linking target lib/librte_mbuf.so.24.1 00:02:22.251 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:22.511 [251/268] Linking target lib/librte_reorder.so.24.1 00:02:22.511 [252/268] Linking target lib/librte_compressdev.so.24.1 00:02:22.511 [253/268] Linking target lib/librte_net.so.24.1 00:02:22.511 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:02:22.511 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:22.511 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:22.511 [257/268] Linking target lib/librte_cmdline.so.24.1 00:02:22.511 [258/268] Linking target lib/librte_hash.so.24.1 00:02:22.770 [259/268] Linking target lib/librte_security.so.24.1 00:02:22.770 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:23.708 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.708 [262/268] Linking target lib/librte_ethdev.so.24.1 00:02:23.968 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:23.968 [264/268] Linking target lib/librte_power.so.24.1 00:02:23.968 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:24.228 [266/268] Linking static target lib/librte_vhost.a 00:02:26.772 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.772 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:26.772 INFO: autodetecting backend as ninja 00:02:26.772 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:48.740 CC lib/log/log.o 00:02:48.740 CC lib/log/log_deprecated.o 00:02:48.740 CC lib/log/log_flags.o 00:02:48.740 CC lib/ut_mock/mock.o 00:02:48.740 CC lib/ut/ut.o 00:02:48.740 LIB libspdk_ut_mock.a 00:02:48.740 LIB 
libspdk_log.a 00:02:48.740 SO libspdk_ut_mock.so.6.0 00:02:48.740 LIB libspdk_ut.a 00:02:48.741 SO libspdk_log.so.7.1 00:02:48.741 SO libspdk_ut.so.2.0 00:02:48.741 SYMLINK libspdk_ut_mock.so 00:02:48.741 SYMLINK libspdk_log.so 00:02:48.741 SYMLINK libspdk_ut.so 00:02:48.741 CXX lib/trace_parser/trace.o 00:02:48.741 CC lib/dma/dma.o 00:02:48.741 CC lib/ioat/ioat.o 00:02:48.741 CC lib/util/bit_array.o 00:02:48.741 CC lib/util/base64.o 00:02:48.741 CC lib/util/cpuset.o 00:02:48.741 CC lib/util/crc16.o 00:02:48.741 CC lib/util/crc32.o 00:02:48.741 CC lib/util/crc32c.o 00:02:48.741 CC lib/vfio_user/host/vfio_user_pci.o 00:02:48.741 CC lib/util/crc32_ieee.o 00:02:48.741 CC lib/util/crc64.o 00:02:48.741 LIB libspdk_dma.a 00:02:48.741 CC lib/util/dif.o 00:02:48.741 SO libspdk_dma.so.5.0 00:02:48.741 CC lib/vfio_user/host/vfio_user.o 00:02:48.741 CC lib/util/fd.o 00:02:48.741 SYMLINK libspdk_dma.so 00:02:48.741 CC lib/util/fd_group.o 00:02:48.741 CC lib/util/file.o 00:02:48.741 CC lib/util/hexlify.o 00:02:48.741 LIB libspdk_ioat.a 00:02:48.741 SO libspdk_ioat.so.7.0 00:02:48.741 CC lib/util/iov.o 00:02:48.741 SYMLINK libspdk_ioat.so 00:02:48.741 CC lib/util/math.o 00:02:48.741 CC lib/util/net.o 00:02:48.741 CC lib/util/pipe.o 00:02:48.741 CC lib/util/strerror_tls.o 00:02:48.741 CC lib/util/string.o 00:02:48.741 LIB libspdk_vfio_user.a 00:02:48.741 SO libspdk_vfio_user.so.5.0 00:02:48.741 CC lib/util/uuid.o 00:02:48.741 SYMLINK libspdk_vfio_user.so 00:02:48.741 CC lib/util/xor.o 00:02:48.741 CC lib/util/zipf.o 00:02:48.741 CC lib/util/md5.o 00:02:48.741 LIB libspdk_util.a 00:02:48.741 SO libspdk_util.so.10.1 00:02:48.741 LIB libspdk_trace_parser.a 00:02:48.741 SO libspdk_trace_parser.so.6.0 00:02:48.741 SYMLINK libspdk_util.so 00:02:48.741 SYMLINK libspdk_trace_parser.so 00:02:48.741 CC lib/json/json_parse.o 00:02:48.741 CC lib/json/json_util.o 00:02:48.741 CC lib/json/json_write.o 00:02:48.741 CC lib/conf/conf.o 00:02:48.741 CC lib/idxd/idxd.o 00:02:48.741 CC 
lib/rdma_utils/rdma_utils.o 00:02:48.741 CC lib/idxd/idxd_user.o 00:02:48.741 CC lib/idxd/idxd_kernel.o 00:02:48.741 CC lib/vmd/vmd.o 00:02:48.741 CC lib/env_dpdk/env.o 00:02:48.741 CC lib/env_dpdk/memory.o 00:02:48.741 CC lib/env_dpdk/pci.o 00:02:48.741 LIB libspdk_conf.a 00:02:48.741 CC lib/vmd/led.o 00:02:48.741 CC lib/env_dpdk/init.o 00:02:48.741 SO libspdk_conf.so.6.0 00:02:48.741 LIB libspdk_json.a 00:02:48.741 LIB libspdk_rdma_utils.a 00:02:48.741 SO libspdk_json.so.6.0 00:02:48.741 SO libspdk_rdma_utils.so.1.0 00:02:48.741 SYMLINK libspdk_conf.so 00:02:48.741 CC lib/env_dpdk/threads.o 00:02:48.741 SYMLINK libspdk_json.so 00:02:48.741 SYMLINK libspdk_rdma_utils.so 00:02:48.741 CC lib/env_dpdk/pci_ioat.o 00:02:48.741 CC lib/env_dpdk/pci_virtio.o 00:02:48.741 CC lib/env_dpdk/pci_vmd.o 00:02:48.741 CC lib/jsonrpc/jsonrpc_server.o 00:02:48.741 CC lib/rdma_provider/common.o 00:02:48.741 CC lib/env_dpdk/pci_idxd.o 00:02:48.741 CC lib/env_dpdk/pci_event.o 00:02:48.741 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:48.741 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:48.741 LIB libspdk_idxd.a 00:02:48.741 CC lib/jsonrpc/jsonrpc_client.o 00:02:48.741 SO libspdk_idxd.so.12.1 00:02:48.741 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:48.741 CC lib/env_dpdk/sigbus_handler.o 00:02:48.741 LIB libspdk_vmd.a 00:02:48.741 CC lib/env_dpdk/pci_dpdk.o 00:02:48.741 SYMLINK libspdk_idxd.so 00:02:48.741 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:48.741 SO libspdk_vmd.so.6.0 00:02:48.741 LIB libspdk_rdma_provider.a 00:02:48.741 SYMLINK libspdk_vmd.so 00:02:48.741 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:48.741 SO libspdk_rdma_provider.so.7.0 00:02:48.741 SYMLINK libspdk_rdma_provider.so 00:02:48.741 LIB libspdk_jsonrpc.a 00:02:48.741 SO libspdk_jsonrpc.so.6.0 00:02:48.741 SYMLINK libspdk_jsonrpc.so 00:02:48.741 CC lib/rpc/rpc.o 00:02:48.741 LIB libspdk_env_dpdk.a 00:02:48.741 LIB libspdk_rpc.a 00:02:48.741 SO libspdk_env_dpdk.so.15.1 00:02:49.001 SO libspdk_rpc.so.6.0 00:02:49.001 SYMLINK 
libspdk_rpc.so 00:02:49.001 SYMLINK libspdk_env_dpdk.so 00:02:49.261 CC lib/keyring/keyring_rpc.o 00:02:49.261 CC lib/keyring/keyring.o 00:02:49.261 CC lib/notify/notify_rpc.o 00:02:49.261 CC lib/notify/notify.o 00:02:49.261 CC lib/trace/trace.o 00:02:49.261 CC lib/trace/trace_flags.o 00:02:49.261 CC lib/trace/trace_rpc.o 00:02:49.533 LIB libspdk_notify.a 00:02:49.533 SO libspdk_notify.so.6.0 00:02:49.533 SYMLINK libspdk_notify.so 00:02:49.533 LIB libspdk_keyring.a 00:02:49.533 LIB libspdk_trace.a 00:02:49.533 SO libspdk_keyring.so.2.0 00:02:49.815 SO libspdk_trace.so.11.0 00:02:49.815 SYMLINK libspdk_keyring.so 00:02:49.815 SYMLINK libspdk_trace.so 00:02:50.073 CC lib/sock/sock_rpc.o 00:02:50.073 CC lib/sock/sock.o 00:02:50.073 CC lib/thread/iobuf.o 00:02:50.073 CC lib/thread/thread.o 00:02:50.641 LIB libspdk_sock.a 00:02:50.641 SO libspdk_sock.so.10.0 00:02:50.641 SYMLINK libspdk_sock.so 00:02:51.208 CC lib/nvme/nvme_ns_cmd.o 00:02:51.208 CC lib/nvme/nvme_fabric.o 00:02:51.208 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:51.208 CC lib/nvme/nvme_ctrlr.o 00:02:51.208 CC lib/nvme/nvme_pcie.o 00:02:51.208 CC lib/nvme/nvme_ns.o 00:02:51.208 CC lib/nvme/nvme_qpair.o 00:02:51.208 CC lib/nvme/nvme_pcie_common.o 00:02:51.208 CC lib/nvme/nvme.o 00:02:51.776 CC lib/nvme/nvme_quirks.o 00:02:51.776 CC lib/nvme/nvme_transport.o 00:02:51.776 CC lib/nvme/nvme_discovery.o 00:02:51.776 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:52.034 LIB libspdk_thread.a 00:02:52.034 SO libspdk_thread.so.11.0 00:02:52.034 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:52.034 CC lib/nvme/nvme_tcp.o 00:02:52.034 SYMLINK libspdk_thread.so 00:02:52.034 CC lib/nvme/nvme_opal.o 00:02:52.034 CC lib/nvme/nvme_io_msg.o 00:02:52.293 CC lib/nvme/nvme_poll_group.o 00:02:52.293 CC lib/nvme/nvme_zns.o 00:02:52.552 CC lib/nvme/nvme_stubs.o 00:02:52.552 CC lib/nvme/nvme_auth.o 00:02:52.552 CC lib/nvme/nvme_cuse.o 00:02:52.552 CC lib/nvme/nvme_rdma.o 00:02:52.811 CC lib/accel/accel.o 00:02:52.811 CC lib/blob/blobstore.o 00:02:53.071 
CC lib/accel/accel_rpc.o 00:02:53.071 CC lib/accel/accel_sw.o 00:02:53.071 CC lib/init/json_config.o 00:02:53.330 CC lib/virtio/virtio.o 00:02:53.330 CC lib/virtio/virtio_vhost_user.o 00:02:53.330 CC lib/init/subsystem.o 00:02:53.589 CC lib/virtio/virtio_vfio_user.o 00:02:53.589 CC lib/fsdev/fsdev.o 00:02:53.589 CC lib/init/subsystem_rpc.o 00:02:53.589 CC lib/virtio/virtio_pci.o 00:02:53.589 CC lib/fsdev/fsdev_io.o 00:02:53.847 CC lib/fsdev/fsdev_rpc.o 00:02:53.847 CC lib/init/rpc.o 00:02:53.847 CC lib/blob/request.o 00:02:53.847 CC lib/blob/zeroes.o 00:02:53.847 CC lib/blob/blob_bs_dev.o 00:02:53.847 LIB libspdk_init.a 00:02:54.106 LIB libspdk_virtio.a 00:02:54.106 SO libspdk_init.so.6.0 00:02:54.106 SO libspdk_virtio.so.7.0 00:02:54.106 SYMLINK libspdk_init.so 00:02:54.106 SYMLINK libspdk_virtio.so 00:02:54.106 LIB libspdk_accel.a 00:02:54.106 LIB libspdk_nvme.a 00:02:54.106 SO libspdk_accel.so.16.0 00:02:54.366 SYMLINK libspdk_accel.so 00:02:54.366 LIB libspdk_fsdev.a 00:02:54.366 SO libspdk_nvme.so.15.0 00:02:54.366 SO libspdk_fsdev.so.2.0 00:02:54.366 CC lib/event/app.o 00:02:54.366 CC lib/event/log_rpc.o 00:02:54.366 CC lib/event/reactor.o 00:02:54.366 CC lib/event/app_rpc.o 00:02:54.366 CC lib/event/scheduler_static.o 00:02:54.366 SYMLINK libspdk_fsdev.so 00:02:54.626 CC lib/bdev/bdev.o 00:02:54.626 CC lib/bdev/bdev_rpc.o 00:02:54.626 CC lib/bdev/bdev_zone.o 00:02:54.626 CC lib/bdev/part.o 00:02:54.626 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:54.626 SYMLINK libspdk_nvme.so 00:02:54.626 CC lib/bdev/scsi_nvme.o 00:02:54.886 LIB libspdk_event.a 00:02:55.146 SO libspdk_event.so.14.0 00:02:55.146 SYMLINK libspdk_event.so 00:02:55.405 LIB libspdk_fuse_dispatcher.a 00:02:55.405 SO libspdk_fuse_dispatcher.so.1.0 00:02:55.405 SYMLINK libspdk_fuse_dispatcher.so 00:02:56.784 LIB libspdk_blob.a 00:02:56.784 SO libspdk_blob.so.11.0 00:02:57.043 SYMLINK libspdk_blob.so 00:02:57.303 CC lib/lvol/lvol.o 00:02:57.303 CC lib/blobfs/blobfs.o 00:02:57.303 CC 
lib/blobfs/tree.o 00:02:57.563 LIB libspdk_bdev.a 00:02:57.563 SO libspdk_bdev.so.17.0 00:02:57.563 SYMLINK libspdk_bdev.so 00:02:57.823 CC lib/nvmf/ctrlr_discovery.o 00:02:57.823 CC lib/nvmf/ctrlr_bdev.o 00:02:57.823 CC lib/nvmf/subsystem.o 00:02:57.823 CC lib/nvmf/ctrlr.o 00:02:57.823 CC lib/nbd/nbd.o 00:02:58.083 CC lib/scsi/dev.o 00:02:58.083 CC lib/ublk/ublk.o 00:02:58.083 CC lib/ftl/ftl_core.o 00:02:58.083 CC lib/scsi/lun.o 00:02:58.343 LIB libspdk_blobfs.a 00:02:58.343 CC lib/nbd/nbd_rpc.o 00:02:58.343 SO libspdk_blobfs.so.10.0 00:02:58.343 CC lib/ftl/ftl_init.o 00:02:58.602 LIB libspdk_lvol.a 00:02:58.602 SO libspdk_lvol.so.10.0 00:02:58.602 SYMLINK libspdk_blobfs.so 00:02:58.602 CC lib/nvmf/nvmf.o 00:02:58.602 CC lib/scsi/port.o 00:02:58.602 CC lib/nvmf/nvmf_rpc.o 00:02:58.602 SYMLINK libspdk_lvol.so 00:02:58.602 CC lib/nvmf/transport.o 00:02:58.602 LIB libspdk_nbd.a 00:02:58.602 SO libspdk_nbd.so.7.0 00:02:58.602 CC lib/ftl/ftl_layout.o 00:02:58.602 CC lib/scsi/scsi.o 00:02:58.602 SYMLINK libspdk_nbd.so 00:02:58.602 CC lib/ublk/ublk_rpc.o 00:02:58.602 CC lib/nvmf/tcp.o 00:02:58.861 CC lib/nvmf/stubs.o 00:02:58.861 CC lib/scsi/scsi_bdev.o 00:02:58.861 LIB libspdk_ublk.a 00:02:58.861 SO libspdk_ublk.so.3.0 00:02:59.170 SYMLINK libspdk_ublk.so 00:02:59.170 CC lib/nvmf/mdns_server.o 00:02:59.170 CC lib/ftl/ftl_debug.o 00:02:59.432 CC lib/nvmf/rdma.o 00:02:59.432 CC lib/ftl/ftl_io.o 00:02:59.432 CC lib/nvmf/auth.o 00:02:59.432 CC lib/scsi/scsi_pr.o 00:02:59.432 CC lib/scsi/scsi_rpc.o 00:02:59.432 CC lib/scsi/task.o 00:02:59.691 CC lib/ftl/ftl_sb.o 00:02:59.691 CC lib/ftl/ftl_l2p.o 00:02:59.691 CC lib/ftl/ftl_l2p_flat.o 00:02:59.691 CC lib/ftl/ftl_nv_cache.o 00:02:59.691 CC lib/ftl/ftl_band.o 00:02:59.691 CC lib/ftl/ftl_band_ops.o 00:02:59.691 LIB libspdk_scsi.a 00:02:59.951 SO libspdk_scsi.so.9.0 00:02:59.951 CC lib/ftl/ftl_writer.o 00:02:59.951 CC lib/ftl/ftl_rq.o 00:02:59.951 SYMLINK libspdk_scsi.so 00:02:59.951 CC lib/ftl/ftl_reloc.o 00:03:00.208 CC 
lib/ftl/ftl_l2p_cache.o 00:03:00.208 CC lib/ftl/ftl_p2l.o 00:03:00.208 CC lib/ftl/ftl_p2l_log.o 00:03:00.208 CC lib/iscsi/conn.o 00:03:00.208 CC lib/vhost/vhost.o 00:03:00.466 CC lib/iscsi/init_grp.o 00:03:00.466 CC lib/iscsi/iscsi.o 00:03:00.466 CC lib/vhost/vhost_rpc.o 00:03:00.725 CC lib/vhost/vhost_scsi.o 00:03:00.725 CC lib/iscsi/param.o 00:03:00.725 CC lib/iscsi/portal_grp.o 00:03:00.725 CC lib/iscsi/tgt_node.o 00:03:00.987 CC lib/ftl/mngt/ftl_mngt.o 00:03:00.987 CC lib/iscsi/iscsi_subsystem.o 00:03:00.987 CC lib/iscsi/iscsi_rpc.o 00:03:00.987 CC lib/iscsi/task.o 00:03:01.246 CC lib/vhost/vhost_blk.o 00:03:01.246 CC lib/vhost/rte_vhost_user.o 00:03:01.246 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:01.246 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:01.246 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:01.505 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:01.505 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:01.505 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:01.505 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:01.505 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:01.505 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:01.764 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:01.764 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:01.764 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:01.764 CC lib/ftl/utils/ftl_conf.o 00:03:02.023 CC lib/ftl/utils/ftl_md.o 00:03:02.023 CC lib/ftl/utils/ftl_mempool.o 00:03:02.023 CC lib/ftl/utils/ftl_bitmap.o 00:03:02.023 CC lib/ftl/utils/ftl_property.o 00:03:02.023 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:02.023 LIB libspdk_nvmf.a 00:03:02.282 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:02.282 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:02.282 LIB libspdk_iscsi.a 00:03:02.282 SO libspdk_nvmf.so.20.0 00:03:02.282 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:02.282 SO libspdk_iscsi.so.8.0 00:03:02.282 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:02.282 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:02.282 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:02.282 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:02.541 CC 
lib/ftl/upgrade/ftl_sb_v5.o 00:03:02.541 SYMLINK libspdk_iscsi.so 00:03:02.541 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:02.541 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:02.541 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:02.541 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:02.541 LIB libspdk_vhost.a 00:03:02.541 CC lib/ftl/base/ftl_base_dev.o 00:03:02.541 SYMLINK libspdk_nvmf.so 00:03:02.541 CC lib/ftl/base/ftl_base_bdev.o 00:03:02.541 CC lib/ftl/ftl_trace.o 00:03:02.541 SO libspdk_vhost.so.8.0 00:03:02.541 SYMLINK libspdk_vhost.so 00:03:02.801 LIB libspdk_ftl.a 00:03:03.060 SO libspdk_ftl.so.9.0 00:03:03.319 SYMLINK libspdk_ftl.so 00:03:03.578 CC module/env_dpdk/env_dpdk_rpc.o 00:03:03.836 CC module/accel/error/accel_error.o 00:03:03.836 CC module/sock/posix/posix.o 00:03:03.836 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:03.836 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:03.836 CC module/fsdev/aio/fsdev_aio.o 00:03:03.836 CC module/keyring/linux/keyring.o 00:03:03.836 CC module/keyring/file/keyring.o 00:03:03.836 CC module/blob/bdev/blob_bdev.o 00:03:03.836 CC module/accel/ioat/accel_ioat.o 00:03:03.836 LIB libspdk_env_dpdk_rpc.a 00:03:03.836 SO libspdk_env_dpdk_rpc.so.6.0 00:03:03.836 SYMLINK libspdk_env_dpdk_rpc.so 00:03:03.836 CC module/accel/error/accel_error_rpc.o 00:03:03.836 CC module/keyring/linux/keyring_rpc.o 00:03:03.836 CC module/keyring/file/keyring_rpc.o 00:03:03.836 LIB libspdk_scheduler_dpdk_governor.a 00:03:04.093 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:04.093 LIB libspdk_scheduler_dynamic.a 00:03:04.093 CC module/accel/ioat/accel_ioat_rpc.o 00:03:04.093 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:04.093 SO libspdk_scheduler_dynamic.so.4.0 00:03:04.093 LIB libspdk_accel_error.a 00:03:04.093 LIB libspdk_keyring_linux.a 00:03:04.093 LIB libspdk_keyring_file.a 00:03:04.093 SO libspdk_keyring_linux.so.1.0 00:03:04.093 SYMLINK libspdk_scheduler_dynamic.so 00:03:04.093 SO libspdk_accel_error.so.2.0 00:03:04.093 SO 
libspdk_keyring_file.so.2.0 00:03:04.093 LIB libspdk_blob_bdev.a 00:03:04.093 LIB libspdk_accel_ioat.a 00:03:04.093 SYMLINK libspdk_keyring_linux.so 00:03:04.093 SYMLINK libspdk_accel_error.so 00:03:04.093 SO libspdk_blob_bdev.so.11.0 00:03:04.093 CC module/scheduler/gscheduler/gscheduler.o 00:03:04.093 SYMLINK libspdk_keyring_file.so 00:03:04.093 SO libspdk_accel_ioat.so.6.0 00:03:04.093 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:04.093 CC module/fsdev/aio/linux_aio_mgr.o 00:03:04.093 CC module/accel/dsa/accel_dsa.o 00:03:04.093 CC module/accel/dsa/accel_dsa_rpc.o 00:03:04.093 SYMLINK libspdk_blob_bdev.so 00:03:04.351 SYMLINK libspdk_accel_ioat.so 00:03:04.351 CC module/accel/iaa/accel_iaa.o 00:03:04.351 LIB libspdk_scheduler_gscheduler.a 00:03:04.351 CC module/accel/iaa/accel_iaa_rpc.o 00:03:04.351 SO libspdk_scheduler_gscheduler.so.4.0 00:03:04.351 SYMLINK libspdk_scheduler_gscheduler.so 00:03:04.610 CC module/bdev/delay/vbdev_delay.o 00:03:04.610 LIB libspdk_accel_iaa.a 00:03:04.610 CC module/blobfs/bdev/blobfs_bdev.o 00:03:04.610 LIB libspdk_accel_dsa.a 00:03:04.610 CC module/bdev/error/vbdev_error.o 00:03:04.610 SO libspdk_accel_iaa.so.3.0 00:03:04.610 SO libspdk_accel_dsa.so.5.0 00:03:04.610 SYMLINK libspdk_accel_iaa.so 00:03:04.610 CC module/bdev/gpt/gpt.o 00:03:04.610 CC module/bdev/lvol/vbdev_lvol.o 00:03:04.610 SYMLINK libspdk_accel_dsa.so 00:03:04.610 LIB libspdk_fsdev_aio.a 00:03:04.610 LIB libspdk_sock_posix.a 00:03:04.610 CC module/bdev/malloc/bdev_malloc.o 00:03:04.610 SO libspdk_sock_posix.so.6.0 00:03:04.610 SO libspdk_fsdev_aio.so.1.0 00:03:04.610 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:04.868 SYMLINK libspdk_fsdev_aio.so 00:03:04.868 CC module/bdev/nvme/bdev_nvme.o 00:03:04.868 CC module/bdev/null/bdev_null.o 00:03:04.868 SYMLINK libspdk_sock_posix.so 00:03:04.868 CC module/bdev/null/bdev_null_rpc.o 00:03:04.868 CC module/bdev/gpt/vbdev_gpt.o 00:03:04.868 CC module/bdev/error/vbdev_error_rpc.o 00:03:04.868 LIB libspdk_blobfs_bdev.a 
00:03:04.868 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:04.868 SO libspdk_blobfs_bdev.so.6.0 00:03:04.868 CC module/bdev/passthru/vbdev_passthru.o 00:03:04.868 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:05.126 SYMLINK libspdk_blobfs_bdev.so 00:03:05.126 LIB libspdk_bdev_error.a 00:03:05.126 SO libspdk_bdev_error.so.6.0 00:03:05.126 LIB libspdk_bdev_delay.a 00:03:05.126 LIB libspdk_bdev_gpt.a 00:03:05.126 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:05.126 SYMLINK libspdk_bdev_error.so 00:03:05.126 SO libspdk_bdev_delay.so.6.0 00:03:05.126 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:05.126 LIB libspdk_bdev_null.a 00:03:05.126 SO libspdk_bdev_gpt.so.6.0 00:03:05.126 CC module/bdev/nvme/nvme_rpc.o 00:03:05.126 CC module/bdev/raid/bdev_raid.o 00:03:05.126 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:05.126 SO libspdk_bdev_null.so.6.0 00:03:05.126 SYMLINK libspdk_bdev_delay.so 00:03:05.126 SYMLINK libspdk_bdev_gpt.so 00:03:05.126 CC module/bdev/raid/bdev_raid_rpc.o 00:03:05.385 LIB libspdk_bdev_passthru.a 00:03:05.385 SYMLINK libspdk_bdev_null.so 00:03:05.385 LIB libspdk_bdev_malloc.a 00:03:05.385 SO libspdk_bdev_passthru.so.6.0 00:03:05.385 SO libspdk_bdev_malloc.so.6.0 00:03:05.385 CC module/bdev/split/vbdev_split.o 00:03:05.385 SYMLINK libspdk_bdev_passthru.so 00:03:05.385 CC module/bdev/nvme/bdev_mdns_client.o 00:03:05.385 SYMLINK libspdk_bdev_malloc.so 00:03:05.385 CC module/bdev/nvme/vbdev_opal.o 00:03:05.385 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:05.385 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:05.644 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:05.644 CC module/bdev/aio/bdev_aio.o 00:03:05.644 CC module/bdev/split/vbdev_split_rpc.o 00:03:05.644 LIB libspdk_bdev_lvol.a 00:03:05.644 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:05.644 SO libspdk_bdev_lvol.so.6.0 00:03:05.903 CC module/bdev/aio/bdev_aio_rpc.o 00:03:05.903 SYMLINK libspdk_bdev_lvol.so 00:03:05.903 LIB libspdk_bdev_split.a 00:03:05.903 CC module/bdev/raid/bdev_raid_sb.o 
00:03:05.903 CC module/bdev/raid/raid0.o 00:03:05.903 SO libspdk_bdev_split.so.6.0 00:03:05.903 LIB libspdk_bdev_zone_block.a 00:03:05.903 SO libspdk_bdev_zone_block.so.6.0 00:03:05.903 SYMLINK libspdk_bdev_split.so 00:03:05.903 CC module/bdev/raid/raid1.o 00:03:05.903 CC module/bdev/raid/concat.o 00:03:05.903 SYMLINK libspdk_bdev_zone_block.so 00:03:05.903 CC module/bdev/ftl/bdev_ftl.o 00:03:06.162 LIB libspdk_bdev_aio.a 00:03:06.162 CC module/bdev/iscsi/bdev_iscsi.o 00:03:06.162 SO libspdk_bdev_aio.so.6.0 00:03:06.162 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:06.162 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:06.162 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:06.162 SYMLINK libspdk_bdev_aio.so 00:03:06.162 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:06.162 CC module/bdev/raid/raid5f.o 00:03:06.162 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:06.421 LIB libspdk_bdev_ftl.a 00:03:06.421 LIB libspdk_bdev_iscsi.a 00:03:06.421 SO libspdk_bdev_ftl.so.6.0 00:03:06.421 SO libspdk_bdev_iscsi.so.6.0 00:03:06.679 SYMLINK libspdk_bdev_ftl.so 00:03:06.680 SYMLINK libspdk_bdev_iscsi.so 00:03:06.680 LIB libspdk_bdev_virtio.a 00:03:06.680 SO libspdk_bdev_virtio.so.6.0 00:03:06.938 LIB libspdk_bdev_raid.a 00:03:06.938 SYMLINK libspdk_bdev_virtio.so 00:03:06.938 SO libspdk_bdev_raid.so.6.0 00:03:06.938 SYMLINK libspdk_bdev_raid.so 00:03:07.874 LIB libspdk_bdev_nvme.a 00:03:07.874 SO libspdk_bdev_nvme.so.7.1 00:03:08.133 SYMLINK libspdk_bdev_nvme.so 00:03:08.701 CC module/event/subsystems/fsdev/fsdev.o 00:03:08.701 CC module/event/subsystems/sock/sock.o 00:03:08.701 CC module/event/subsystems/keyring/keyring.o 00:03:08.701 CC module/event/subsystems/vmd/vmd.o 00:03:08.701 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:08.701 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:08.701 CC module/event/subsystems/iobuf/iobuf.o 00:03:08.701 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:08.701 CC module/event/subsystems/scheduler/scheduler.o 00:03:08.959 LIB 
libspdk_event_fsdev.a 00:03:08.959 LIB libspdk_event_vhost_blk.a 00:03:08.959 LIB libspdk_event_sock.a 00:03:08.959 LIB libspdk_event_keyring.a 00:03:08.959 SO libspdk_event_fsdev.so.1.0 00:03:08.959 LIB libspdk_event_vmd.a 00:03:08.959 SO libspdk_event_vhost_blk.so.3.0 00:03:08.959 SO libspdk_event_sock.so.5.0 00:03:08.959 SO libspdk_event_keyring.so.1.0 00:03:08.959 SO libspdk_event_vmd.so.6.0 00:03:08.959 LIB libspdk_event_iobuf.a 00:03:08.959 LIB libspdk_event_scheduler.a 00:03:08.959 SYMLINK libspdk_event_fsdev.so 00:03:08.959 SYMLINK libspdk_event_vhost_blk.so 00:03:08.959 SO libspdk_event_iobuf.so.3.0 00:03:08.959 SYMLINK libspdk_event_sock.so 00:03:08.959 SO libspdk_event_scheduler.so.4.0 00:03:08.959 SYMLINK libspdk_event_keyring.so 00:03:08.960 SYMLINK libspdk_event_vmd.so 00:03:08.960 SYMLINK libspdk_event_iobuf.so 00:03:08.960 SYMLINK libspdk_event_scheduler.so 00:03:09.528 CC module/event/subsystems/accel/accel.o 00:03:09.528 LIB libspdk_event_accel.a 00:03:09.528 SO libspdk_event_accel.so.6.0 00:03:09.787 SYMLINK libspdk_event_accel.so 00:03:10.045 CC module/event/subsystems/bdev/bdev.o 00:03:10.303 LIB libspdk_event_bdev.a 00:03:10.303 SO libspdk_event_bdev.so.6.0 00:03:10.303 SYMLINK libspdk_event_bdev.so 00:03:10.869 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:10.869 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:10.869 CC module/event/subsystems/scsi/scsi.o 00:03:10.869 CC module/event/subsystems/ublk/ublk.o 00:03:10.869 CC module/event/subsystems/nbd/nbd.o 00:03:10.869 LIB libspdk_event_ublk.a 00:03:10.869 LIB libspdk_event_scsi.a 00:03:10.869 SO libspdk_event_ublk.so.3.0 00:03:10.869 LIB libspdk_event_nbd.a 00:03:10.869 SO libspdk_event_scsi.so.6.0 00:03:10.869 LIB libspdk_event_nvmf.a 00:03:10.869 SO libspdk_event_nbd.so.6.0 00:03:10.869 SYMLINK libspdk_event_ublk.so 00:03:11.127 SO libspdk_event_nvmf.so.6.0 00:03:11.127 SYMLINK libspdk_event_scsi.so 00:03:11.127 SYMLINK libspdk_event_nbd.so 00:03:11.127 SYMLINK libspdk_event_nvmf.so 
00:03:11.386 CC module/event/subsystems/iscsi/iscsi.o 00:03:11.386 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:11.645 LIB libspdk_event_vhost_scsi.a 00:03:11.645 LIB libspdk_event_iscsi.a 00:03:11.645 SO libspdk_event_vhost_scsi.so.3.0 00:03:11.645 SO libspdk_event_iscsi.so.6.0 00:03:11.645 SYMLINK libspdk_event_iscsi.so 00:03:11.645 SYMLINK libspdk_event_vhost_scsi.so 00:03:11.903 SO libspdk.so.6.0 00:03:11.903 SYMLINK libspdk.so 00:03:12.162 CC app/spdk_lspci/spdk_lspci.o 00:03:12.162 CC app/spdk_nvme_perf/perf.o 00:03:12.162 CC app/trace_record/trace_record.o 00:03:12.162 CXX app/trace/trace.o 00:03:12.162 CC app/spdk_nvme_identify/identify.o 00:03:12.162 CC app/iscsi_tgt/iscsi_tgt.o 00:03:12.162 CC app/nvmf_tgt/nvmf_main.o 00:03:12.162 CC app/spdk_tgt/spdk_tgt.o 00:03:12.162 CC test/thread/poller_perf/poller_perf.o 00:03:12.162 CC examples/util/zipf/zipf.o 00:03:12.162 LINK spdk_lspci 00:03:12.421 LINK poller_perf 00:03:12.421 LINK nvmf_tgt 00:03:12.421 LINK spdk_trace_record 00:03:12.421 LINK iscsi_tgt 00:03:12.421 LINK spdk_tgt 00:03:12.421 LINK zipf 00:03:12.421 CC app/spdk_nvme_discover/discovery_aer.o 00:03:12.421 LINK spdk_trace 00:03:12.681 CC app/spdk_top/spdk_top.o 00:03:12.681 LINK spdk_nvme_discover 00:03:12.681 CC examples/ioat/perf/perf.o 00:03:12.681 CC app/spdk_dd/spdk_dd.o 00:03:12.681 CC test/dma/test_dma/test_dma.o 00:03:12.681 CC examples/vmd/lsvmd/lsvmd.o 00:03:12.939 CC examples/idxd/perf/perf.o 00:03:12.939 CC app/fio/nvme/fio_plugin.o 00:03:12.939 LINK lsvmd 00:03:12.939 LINK ioat_perf 00:03:13.197 CC app/vhost/vhost.o 00:03:13.197 LINK spdk_nvme_perf 00:03:13.197 LINK spdk_nvme_identify 00:03:13.197 LINK spdk_dd 00:03:13.197 CC examples/vmd/led/led.o 00:03:13.197 LINK idxd_perf 00:03:13.197 CC examples/ioat/verify/verify.o 00:03:13.197 LINK vhost 00:03:13.456 LINK test_dma 00:03:13.456 LINK led 00:03:13.456 TEST_HEADER include/spdk/accel.h 00:03:13.456 TEST_HEADER include/spdk/accel_module.h 00:03:13.456 TEST_HEADER 
include/spdk/assert.h 00:03:13.456 TEST_HEADER include/spdk/barrier.h 00:03:13.456 TEST_HEADER include/spdk/base64.h 00:03:13.456 TEST_HEADER include/spdk/bdev.h 00:03:13.456 TEST_HEADER include/spdk/bdev_module.h 00:03:13.456 TEST_HEADER include/spdk/bdev_zone.h 00:03:13.456 TEST_HEADER include/spdk/bit_array.h 00:03:13.456 TEST_HEADER include/spdk/bit_pool.h 00:03:13.456 TEST_HEADER include/spdk/blob_bdev.h 00:03:13.456 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:13.456 TEST_HEADER include/spdk/blobfs.h 00:03:13.456 TEST_HEADER include/spdk/blob.h 00:03:13.456 TEST_HEADER include/spdk/conf.h 00:03:13.456 TEST_HEADER include/spdk/config.h 00:03:13.456 TEST_HEADER include/spdk/cpuset.h 00:03:13.456 TEST_HEADER include/spdk/crc16.h 00:03:13.456 TEST_HEADER include/spdk/crc32.h 00:03:13.456 TEST_HEADER include/spdk/crc64.h 00:03:13.456 TEST_HEADER include/spdk/dif.h 00:03:13.456 TEST_HEADER include/spdk/dma.h 00:03:13.456 TEST_HEADER include/spdk/endian.h 00:03:13.456 TEST_HEADER include/spdk/env_dpdk.h 00:03:13.456 TEST_HEADER include/spdk/env.h 00:03:13.456 TEST_HEADER include/spdk/event.h 00:03:13.456 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:13.456 CC test/app/bdev_svc/bdev_svc.o 00:03:13.456 TEST_HEADER include/spdk/fd.h 00:03:13.456 TEST_HEADER include/spdk/fd_group.h 00:03:13.456 LINK verify 00:03:13.456 TEST_HEADER include/spdk/file.h 00:03:13.456 TEST_HEADER include/spdk/fsdev.h 00:03:13.456 TEST_HEADER include/spdk/fsdev_module.h 00:03:13.456 TEST_HEADER include/spdk/ftl.h 00:03:13.456 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:13.456 TEST_HEADER include/spdk/gpt_spec.h 00:03:13.456 TEST_HEADER include/spdk/hexlify.h 00:03:13.456 TEST_HEADER include/spdk/histogram_data.h 00:03:13.456 TEST_HEADER include/spdk/idxd.h 00:03:13.456 TEST_HEADER include/spdk/idxd_spec.h 00:03:13.456 TEST_HEADER include/spdk/init.h 00:03:13.456 TEST_HEADER include/spdk/ioat.h 00:03:13.456 TEST_HEADER include/spdk/ioat_spec.h 00:03:13.456 TEST_HEADER 
include/spdk/iscsi_spec.h 00:03:13.456 TEST_HEADER include/spdk/json.h 00:03:13.456 TEST_HEADER include/spdk/jsonrpc.h 00:03:13.456 TEST_HEADER include/spdk/keyring.h 00:03:13.456 TEST_HEADER include/spdk/keyring_module.h 00:03:13.456 TEST_HEADER include/spdk/likely.h 00:03:13.714 TEST_HEADER include/spdk/log.h 00:03:13.714 TEST_HEADER include/spdk/lvol.h 00:03:13.714 TEST_HEADER include/spdk/md5.h 00:03:13.714 TEST_HEADER include/spdk/memory.h 00:03:13.714 LINK spdk_nvme 00:03:13.714 TEST_HEADER include/spdk/mmio.h 00:03:13.714 TEST_HEADER include/spdk/nbd.h 00:03:13.714 TEST_HEADER include/spdk/net.h 00:03:13.714 TEST_HEADER include/spdk/notify.h 00:03:13.714 TEST_HEADER include/spdk/nvme.h 00:03:13.714 TEST_HEADER include/spdk/nvme_intel.h 00:03:13.714 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:13.714 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:13.714 TEST_HEADER include/spdk/nvme_spec.h 00:03:13.714 TEST_HEADER include/spdk/nvme_zns.h 00:03:13.714 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:13.714 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:13.714 TEST_HEADER include/spdk/nvmf.h 00:03:13.714 TEST_HEADER include/spdk/nvmf_spec.h 00:03:13.714 TEST_HEADER include/spdk/nvmf_transport.h 00:03:13.714 TEST_HEADER include/spdk/opal.h 00:03:13.714 TEST_HEADER include/spdk/opal_spec.h 00:03:13.714 TEST_HEADER include/spdk/pci_ids.h 00:03:13.714 TEST_HEADER include/spdk/pipe.h 00:03:13.714 TEST_HEADER include/spdk/queue.h 00:03:13.714 TEST_HEADER include/spdk/reduce.h 00:03:13.714 TEST_HEADER include/spdk/rpc.h 00:03:13.714 TEST_HEADER include/spdk/scheduler.h 00:03:13.714 TEST_HEADER include/spdk/scsi.h 00:03:13.714 TEST_HEADER include/spdk/scsi_spec.h 00:03:13.714 CC examples/thread/thread/thread_ex.o 00:03:13.714 TEST_HEADER include/spdk/sock.h 00:03:13.714 TEST_HEADER include/spdk/stdinc.h 00:03:13.714 TEST_HEADER include/spdk/string.h 00:03:13.714 TEST_HEADER include/spdk/thread.h 00:03:13.714 TEST_HEADER include/spdk/trace.h 00:03:13.714 TEST_HEADER 
include/spdk/trace_parser.h 00:03:13.714 TEST_HEADER include/spdk/tree.h 00:03:13.714 TEST_HEADER include/spdk/ublk.h 00:03:13.714 TEST_HEADER include/spdk/util.h 00:03:13.714 TEST_HEADER include/spdk/uuid.h 00:03:13.714 TEST_HEADER include/spdk/version.h 00:03:13.714 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:13.714 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:13.714 TEST_HEADER include/spdk/vhost.h 00:03:13.714 TEST_HEADER include/spdk/vmd.h 00:03:13.714 TEST_HEADER include/spdk/xor.h 00:03:13.714 TEST_HEADER include/spdk/zipf.h 00:03:13.714 CXX test/cpp_headers/accel.o 00:03:13.714 CXX test/cpp_headers/accel_module.o 00:03:13.714 CC app/fio/bdev/fio_plugin.o 00:03:13.714 CC examples/sock/hello_world/hello_sock.o 00:03:13.714 LINK bdev_svc 00:03:13.714 LINK interrupt_tgt 00:03:13.714 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:13.714 LINK spdk_top 00:03:13.971 CXX test/cpp_headers/assert.o 00:03:13.971 LINK thread 00:03:13.971 CC test/env/vtophys/vtophys.o 00:03:13.971 CXX test/cpp_headers/barrier.o 00:03:13.971 CC test/env/mem_callbacks/mem_callbacks.o 00:03:13.971 LINK hello_sock 00:03:13.971 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:14.229 LINK vtophys 00:03:14.229 CXX test/cpp_headers/base64.o 00:03:14.229 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:14.229 CC test/event/event_perf/event_perf.o 00:03:14.229 CXX test/cpp_headers/bdev.o 00:03:14.229 LINK env_dpdk_post_init 00:03:14.229 LINK nvme_fuzz 00:03:14.229 LINK spdk_bdev 00:03:14.229 LINK event_perf 00:03:14.486 CXX test/cpp_headers/bdev_module.o 00:03:14.486 CC examples/accel/perf/accel_perf.o 00:03:14.486 CC examples/nvme/hello_world/hello_world.o 00:03:14.486 CC examples/nvme/reconnect/reconnect.o 00:03:14.486 CC examples/blob/hello_world/hello_blob.o 00:03:14.486 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:14.486 CXX test/cpp_headers/bdev_zone.o 00:03:14.486 CC test/env/memory/memory_ut.o 00:03:14.486 LINK mem_callbacks 00:03:14.486 CC test/event/reactor/reactor.o 
00:03:14.744 CXX test/cpp_headers/bit_array.o 00:03:14.744 LINK hello_world 00:03:14.744 LINK reactor 00:03:14.744 LINK hello_blob 00:03:15.002 CXX test/cpp_headers/bit_pool.o 00:03:15.002 CC examples/nvme/arbitration/arbitration.o 00:03:15.002 LINK reconnect 00:03:15.002 LINK accel_perf 00:03:15.002 CC test/event/reactor_perf/reactor_perf.o 00:03:15.002 CXX test/cpp_headers/blob_bdev.o 00:03:15.260 CC test/nvme/aer/aer.o 00:03:15.260 LINK nvme_manage 00:03:15.260 CC examples/blob/cli/blobcli.o 00:03:15.260 LINK reactor_perf 00:03:15.260 CXX test/cpp_headers/blobfs_bdev.o 00:03:15.260 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:15.260 LINK arbitration 00:03:15.260 CC test/rpc_client/rpc_client_test.o 00:03:15.519 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:15.519 CXX test/cpp_headers/blobfs.o 00:03:15.519 LINK aer 00:03:15.519 CC test/event/app_repeat/app_repeat.o 00:03:15.519 CC test/accel/dif/dif.o 00:03:15.519 CC examples/nvme/hotplug/hotplug.o 00:03:15.519 LINK rpc_client_test 00:03:15.519 CXX test/cpp_headers/blob.o 00:03:15.777 LINK app_repeat 00:03:15.777 CC test/nvme/reset/reset.o 00:03:15.777 LINK blobcli 00:03:15.777 CXX test/cpp_headers/conf.o 00:03:15.777 CXX test/cpp_headers/config.o 00:03:15.777 LINK hotplug 00:03:15.777 CC test/event/scheduler/scheduler.o 00:03:15.777 LINK vhost_fuzz 00:03:16.036 LINK memory_ut 00:03:16.036 CXX test/cpp_headers/cpuset.o 00:03:16.036 CC test/app/histogram_perf/histogram_perf.o 00:03:16.036 CXX test/cpp_headers/crc16.o 00:03:16.036 LINK reset 00:03:16.036 LINK histogram_perf 00:03:16.036 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:16.036 CXX test/cpp_headers/crc32.o 00:03:16.036 LINK scheduler 00:03:16.036 CC examples/nvme/abort/abort.o 00:03:16.298 CC test/env/pci/pci_ut.o 00:03:16.298 CC test/nvme/sgl/sgl.o 00:03:16.298 LINK iscsi_fuzz 00:03:16.298 CXX test/cpp_headers/crc64.o 00:03:16.298 LINK cmb_copy 00:03:16.298 CC test/nvme/e2edp/nvme_dp.o 00:03:16.298 CC examples/fsdev/hello_world/hello_fsdev.o 
00:03:16.298 LINK dif 00:03:16.557 CXX test/cpp_headers/dif.o 00:03:16.557 CC test/nvme/overhead/overhead.o 00:03:16.557 LINK sgl 00:03:16.557 LINK abort 00:03:16.557 CC test/app/jsoncat/jsoncat.o 00:03:16.557 CXX test/cpp_headers/dma.o 00:03:16.557 LINK pci_ut 00:03:16.557 CC examples/bdev/hello_world/hello_bdev.o 00:03:16.816 LINK nvme_dp 00:03:16.816 LINK hello_fsdev 00:03:16.816 LINK jsoncat 00:03:16.816 LINK overhead 00:03:16.816 CXX test/cpp_headers/endian.o 00:03:16.816 CC test/blobfs/mkfs/mkfs.o 00:03:16.816 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:17.075 LINK hello_bdev 00:03:17.075 CXX test/cpp_headers/env_dpdk.o 00:03:17.075 CC test/nvme/err_injection/err_injection.o 00:03:17.075 CC test/app/stub/stub.o 00:03:17.075 CC test/lvol/esnap/esnap.o 00:03:17.075 LINK mkfs 00:03:17.075 CC test/nvme/startup/startup.o 00:03:17.075 CC test/nvme/reserve/reserve.o 00:03:17.075 LINK pmr_persistence 00:03:17.075 CC examples/bdev/bdevperf/bdevperf.o 00:03:17.075 CXX test/cpp_headers/env.o 00:03:17.334 LINK stub 00:03:17.334 LINK err_injection 00:03:17.334 CC test/nvme/simple_copy/simple_copy.o 00:03:17.334 LINK startup 00:03:17.334 LINK reserve 00:03:17.334 CXX test/cpp_headers/event.o 00:03:17.334 CC test/nvme/connect_stress/connect_stress.o 00:03:17.592 CC test/nvme/boot_partition/boot_partition.o 00:03:17.592 CC test/bdev/bdevio/bdevio.o 00:03:17.592 CXX test/cpp_headers/fd_group.o 00:03:17.592 CC test/nvme/compliance/nvme_compliance.o 00:03:17.592 LINK simple_copy 00:03:17.592 CC test/nvme/fused_ordering/fused_ordering.o 00:03:17.592 LINK connect_stress 00:03:17.592 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:17.592 LINK boot_partition 00:03:17.592 CXX test/cpp_headers/fd.o 00:03:17.851 LINK fused_ordering 00:03:17.851 LINK doorbell_aers 00:03:17.851 CXX test/cpp_headers/file.o 00:03:17.851 CC test/nvme/cuse/cuse.o 00:03:17.851 CC test/nvme/fdp/fdp.o 00:03:17.851 CXX test/cpp_headers/fsdev.o 00:03:17.851 LINK nvme_compliance 00:03:17.851 CXX 
test/cpp_headers/fsdev_module.o 00:03:17.851 LINK bdevio 00:03:17.851 CXX test/cpp_headers/ftl.o 00:03:18.110 CXX test/cpp_headers/fuse_dispatcher.o 00:03:18.110 CXX test/cpp_headers/gpt_spec.o 00:03:18.110 LINK bdevperf 00:03:18.110 CXX test/cpp_headers/hexlify.o 00:03:18.110 CXX test/cpp_headers/histogram_data.o 00:03:18.110 CXX test/cpp_headers/idxd.o 00:03:18.110 CXX test/cpp_headers/idxd_spec.o 00:03:18.110 CXX test/cpp_headers/init.o 00:03:18.110 CXX test/cpp_headers/ioat.o 00:03:18.369 LINK fdp 00:03:18.369 CXX test/cpp_headers/ioat_spec.o 00:03:18.369 CXX test/cpp_headers/iscsi_spec.o 00:03:18.369 CXX test/cpp_headers/json.o 00:03:18.369 CXX test/cpp_headers/jsonrpc.o 00:03:18.369 CXX test/cpp_headers/keyring.o 00:03:18.369 CXX test/cpp_headers/keyring_module.o 00:03:18.369 CXX test/cpp_headers/likely.o 00:03:18.369 CXX test/cpp_headers/log.o 00:03:18.369 CXX test/cpp_headers/lvol.o 00:03:18.369 CXX test/cpp_headers/md5.o 00:03:18.627 CC examples/nvmf/nvmf/nvmf.o 00:03:18.627 CXX test/cpp_headers/memory.o 00:03:18.627 CXX test/cpp_headers/mmio.o 00:03:18.627 CXX test/cpp_headers/nbd.o 00:03:18.627 CXX test/cpp_headers/net.o 00:03:18.627 CXX test/cpp_headers/notify.o 00:03:18.627 CXX test/cpp_headers/nvme.o 00:03:18.627 CXX test/cpp_headers/nvme_intel.o 00:03:18.627 CXX test/cpp_headers/nvme_ocssd.o 00:03:18.627 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:18.627 CXX test/cpp_headers/nvme_spec.o 00:03:18.627 CXX test/cpp_headers/nvme_zns.o 00:03:18.627 CXX test/cpp_headers/nvmf_cmd.o 00:03:18.885 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:18.885 CXX test/cpp_headers/nvmf.o 00:03:18.885 LINK nvmf 00:03:18.885 CXX test/cpp_headers/nvmf_spec.o 00:03:18.885 CXX test/cpp_headers/nvmf_transport.o 00:03:18.885 CXX test/cpp_headers/opal.o 00:03:18.885 CXX test/cpp_headers/opal_spec.o 00:03:18.885 CXX test/cpp_headers/pci_ids.o 00:03:18.885 CXX test/cpp_headers/pipe.o 00:03:18.885 CXX test/cpp_headers/queue.o 00:03:18.885 CXX test/cpp_headers/reduce.o 00:03:19.144 CXX 
test/cpp_headers/rpc.o 00:03:19.144 CXX test/cpp_headers/scheduler.o 00:03:19.144 CXX test/cpp_headers/scsi.o 00:03:19.144 CXX test/cpp_headers/scsi_spec.o 00:03:19.144 CXX test/cpp_headers/sock.o 00:03:19.144 CXX test/cpp_headers/stdinc.o 00:03:19.144 CXX test/cpp_headers/string.o 00:03:19.144 CXX test/cpp_headers/thread.o 00:03:19.144 CXX test/cpp_headers/trace.o 00:03:19.144 CXX test/cpp_headers/trace_parser.o 00:03:19.144 CXX test/cpp_headers/tree.o 00:03:19.144 CXX test/cpp_headers/ublk.o 00:03:19.402 CXX test/cpp_headers/util.o 00:03:19.402 CXX test/cpp_headers/uuid.o 00:03:19.402 CXX test/cpp_headers/version.o 00:03:19.402 CXX test/cpp_headers/vfio_user_pci.o 00:03:19.402 CXX test/cpp_headers/vfio_user_spec.o 00:03:19.402 CXX test/cpp_headers/vhost.o 00:03:19.402 LINK cuse 00:03:19.402 CXX test/cpp_headers/vmd.o 00:03:19.402 CXX test/cpp_headers/xor.o 00:03:19.402 CXX test/cpp_headers/zipf.o 00:03:23.599 LINK esnap 00:03:23.599 00:03:23.599 real 1m27.895s 00:03:23.599 user 7m42.198s 00:03:23.599 sys 1m36.741s 00:03:23.599 13:13:12 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:23.599 13:13:12 make -- common/autotest_common.sh@10 -- $ set +x 00:03:23.599 ************************************ 00:03:23.599 END TEST make 00:03:23.599 ************************************ 00:03:23.599 13:13:12 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:23.599 13:13:12 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:23.599 13:13:12 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:23.599 13:13:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:23.599 13:13:12 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:23.599 13:13:12 -- pm/common@44 -- $ pid=5471 00:03:23.599 13:13:12 -- pm/common@50 -- $ kill -TERM 5471 00:03:23.599 13:13:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:23.599 13:13:12 -- pm/common@43 -- $ [[ -e 
/home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:23.599 13:13:12 -- pm/common@44 -- $ pid=5473 00:03:23.599 13:13:12 -- pm/common@50 -- $ kill -TERM 5473 00:03:23.599 13:13:12 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:23.599 13:13:12 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:23.599 13:13:12 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:23.599 13:13:12 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:23.599 13:13:12 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:23.599 13:13:12 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:23.599 13:13:12 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:23.599 13:13:12 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:23.599 13:13:12 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:23.599 13:13:12 -- scripts/common.sh@336 -- # IFS=.-: 00:03:23.599 13:13:12 -- scripts/common.sh@336 -- # read -ra ver1 00:03:23.599 13:13:12 -- scripts/common.sh@337 -- # IFS=.-: 00:03:23.599 13:13:12 -- scripts/common.sh@337 -- # read -ra ver2 00:03:23.599 13:13:12 -- scripts/common.sh@338 -- # local 'op=<' 00:03:23.599 13:13:12 -- scripts/common.sh@340 -- # ver1_l=2 00:03:23.599 13:13:12 -- scripts/common.sh@341 -- # ver2_l=1 00:03:23.599 13:13:12 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:23.599 13:13:12 -- scripts/common.sh@344 -- # case "$op" in 00:03:23.599 13:13:12 -- scripts/common.sh@345 -- # : 1 00:03:23.599 13:13:12 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:23.599 13:13:12 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:23.599 13:13:12 -- scripts/common.sh@365 -- # decimal 1 00:03:23.599 13:13:12 -- scripts/common.sh@353 -- # local d=1 00:03:23.599 13:13:12 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:23.599 13:13:12 -- scripts/common.sh@355 -- # echo 1 00:03:23.599 13:13:12 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:23.599 13:13:12 -- scripts/common.sh@366 -- # decimal 2 00:03:23.599 13:13:12 -- scripts/common.sh@353 -- # local d=2 00:03:23.599 13:13:12 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:23.599 13:13:12 -- scripts/common.sh@355 -- # echo 2 00:03:23.599 13:13:12 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:23.599 13:13:12 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:23.599 13:13:12 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:23.599 13:13:12 -- scripts/common.sh@368 -- # return 0 00:03:23.599 13:13:12 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:23.599 13:13:12 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:23.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:23.599 --rc genhtml_branch_coverage=1 00:03:23.599 --rc genhtml_function_coverage=1 00:03:23.599 --rc genhtml_legend=1 00:03:23.599 --rc geninfo_all_blocks=1 00:03:23.599 --rc geninfo_unexecuted_blocks=1 00:03:23.599 00:03:23.599 ' 00:03:23.599 13:13:12 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:23.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:23.599 --rc genhtml_branch_coverage=1 00:03:23.599 --rc genhtml_function_coverage=1 00:03:23.599 --rc genhtml_legend=1 00:03:23.599 --rc geninfo_all_blocks=1 00:03:23.599 --rc geninfo_unexecuted_blocks=1 00:03:23.599 00:03:23.599 ' 00:03:23.599 13:13:12 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:23.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:23.599 --rc genhtml_branch_coverage=1 00:03:23.599 --rc 
genhtml_function_coverage=1 00:03:23.599 --rc genhtml_legend=1 00:03:23.599 --rc geninfo_all_blocks=1 00:03:23.599 --rc geninfo_unexecuted_blocks=1 00:03:23.599 00:03:23.599 ' 00:03:23.599 13:13:12 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:23.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:23.599 --rc genhtml_branch_coverage=1 00:03:23.599 --rc genhtml_function_coverage=1 00:03:23.599 --rc genhtml_legend=1 00:03:23.599 --rc geninfo_all_blocks=1 00:03:23.599 --rc geninfo_unexecuted_blocks=1 00:03:23.599 00:03:23.599 ' 00:03:23.599 13:13:12 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:23.599 13:13:12 -- nvmf/common.sh@7 -- # uname -s 00:03:23.600 13:13:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:23.600 13:13:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:23.600 13:13:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:23.600 13:13:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:23.600 13:13:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:23.600 13:13:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:23.600 13:13:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:23.600 13:13:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:23.600 13:13:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:23.600 13:13:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:23.600 13:13:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c667c019-11b1-4d83-ab1b-f127f05fffc9 00:03:23.600 13:13:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=c667c019-11b1-4d83-ab1b-f127f05fffc9 00:03:23.600 13:13:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:23.600 13:13:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:23.600 13:13:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:23.600 13:13:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:03:23.600 13:13:12 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:23.600 13:13:12 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:23.600 13:13:12 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:23.600 13:13:12 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:23.600 13:13:12 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:23.600 13:13:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:23.600 13:13:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:23.600 13:13:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:23.600 13:13:12 -- paths/export.sh@5 -- # export PATH 00:03:23.600 13:13:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:23.600 13:13:12 -- nvmf/common.sh@51 -- # : 0 00:03:23.600 13:13:12 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:23.600 13:13:12 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:23.600 13:13:12 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:03:23.600 13:13:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:23.600 13:13:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:23.600 13:13:12 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:23.600 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:23.600 13:13:12 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:23.600 13:13:12 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:23.600 13:13:12 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:23.600 13:13:12 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:23.600 13:13:12 -- spdk/autotest.sh@32 -- # uname -s 00:03:23.600 13:13:12 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:23.600 13:13:12 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:23.600 13:13:12 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:23.600 13:13:12 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:23.600 13:13:12 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:23.600 13:13:12 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:23.600 13:13:12 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:23.600 13:13:12 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:23.600 13:13:12 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:23.600 13:13:12 -- spdk/autotest.sh@48 -- # udevadm_pid=54463 00:03:23.600 13:13:12 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:23.600 13:13:12 -- pm/common@17 -- # local monitor 00:03:23.600 13:13:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:23.600 13:13:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:23.600 13:13:12 -- pm/common@25 -- # sleep 1 00:03:23.600 13:13:12 -- pm/common@21 -- # date +%s 00:03:23.600 13:13:12 -- 
pm/common@21 -- # date +%s 00:03:23.600 13:13:12 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731849192 00:03:23.600 13:13:12 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731849192 00:03:23.600 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731849192_collect-cpu-load.pm.log 00:03:23.600 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731849192_collect-vmstat.pm.log 00:03:24.979 13:13:13 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:24.980 13:13:13 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:24.980 13:13:13 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:24.980 13:13:13 -- common/autotest_common.sh@10 -- # set +x 00:03:24.980 13:13:13 -- spdk/autotest.sh@59 -- # create_test_list 00:03:24.980 13:13:13 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:24.980 13:13:13 -- common/autotest_common.sh@10 -- # set +x 00:03:24.980 13:13:13 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:24.980 13:13:13 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:24.980 13:13:13 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:24.980 13:13:13 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:24.980 13:13:13 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:24.980 13:13:13 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:24.980 13:13:13 -- common/autotest_common.sh@1457 -- # uname 00:03:24.980 13:13:13 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:24.980 13:13:13 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:24.980 13:13:13 -- common/autotest_common.sh@1477 -- 
# uname 00:03:24.980 13:13:13 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:24.980 13:13:13 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:24.980 13:13:13 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:24.980 lcov: LCOV version 1.15 00:03:24.980 13:13:13 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:39.863 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:39.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:54.743 13:13:42 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:54.743 13:13:42 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:54.743 13:13:42 -- common/autotest_common.sh@10 -- # set +x 00:03:54.743 13:13:42 -- spdk/autotest.sh@78 -- # rm -f 00:03:54.743 13:13:42 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:54.743 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:54.743 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:54.743 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:54.743 13:13:43 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:54.743 13:13:43 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:54.743 13:13:43 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:54.743 13:13:43 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:03:54.743 
13:13:43 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:54.743 13:13:43 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:03:54.743 13:13:43 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:54.743 13:13:43 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:54.743 13:13:43 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:54.743 13:13:43 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:54.743 13:13:43 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:03:54.743 13:13:43 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:03:54.743 13:13:43 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:54.743 13:13:43 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:54.743 13:13:43 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:54.743 13:13:43 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:03:54.743 13:13:43 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:03:54.743 13:13:43 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:54.743 13:13:43 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:54.743 13:13:43 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:54.743 13:13:43 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:03:54.743 13:13:43 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:03:54.743 13:13:43 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:54.743 13:13:43 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:54.743 13:13:43 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:54.743 13:13:43 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:54.743 13:13:43 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:54.743 13:13:43 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme0n1 00:03:54.743 13:13:43 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:54.743 13:13:43 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:54.743 No valid GPT data, bailing 00:03:54.743 13:13:43 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:54.743 13:13:43 -- scripts/common.sh@394 -- # pt= 00:03:54.743 13:13:43 -- scripts/common.sh@395 -- # return 1 00:03:54.743 13:13:43 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:54.743 1+0 records in 00:03:54.743 1+0 records out 00:03:54.743 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00445727 s, 235 MB/s 00:03:54.743 13:13:43 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:54.743 13:13:43 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:54.743 13:13:43 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:03:54.743 13:13:43 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:03:54.743 13:13:43 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:54.743 No valid GPT data, bailing 00:03:54.743 13:13:43 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:54.743 13:13:43 -- scripts/common.sh@394 -- # pt= 00:03:54.743 13:13:43 -- scripts/common.sh@395 -- # return 1 00:03:54.743 13:13:43 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:54.743 1+0 records in 00:03:54.743 1+0 records out 00:03:54.743 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00629971 s, 166 MB/s 00:03:54.743 13:13:43 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:54.743 13:13:43 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:54.743 13:13:43 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:03:54.743 13:13:43 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:03:54.743 13:13:43 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 
00:03:55.003 No valid GPT data, bailing 00:03:55.003 13:13:44 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:55.003 13:13:44 -- scripts/common.sh@394 -- # pt= 00:03:55.003 13:13:44 -- scripts/common.sh@395 -- # return 1 00:03:55.003 13:13:44 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:03:55.003 1+0 records in 00:03:55.003 1+0 records out 00:03:55.003 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00613935 s, 171 MB/s 00:03:55.003 13:13:44 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:55.003 13:13:44 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:55.003 13:13:44 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:03:55.003 13:13:44 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:03:55.003 13:13:44 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:03:55.003 No valid GPT data, bailing 00:03:55.003 13:13:44 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:55.003 13:13:44 -- scripts/common.sh@394 -- # pt= 00:03:55.003 13:13:44 -- scripts/common.sh@395 -- # return 1 00:03:55.003 13:13:44 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:03:55.003 1+0 records in 00:03:55.003 1+0 records out 00:03:55.003 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00840616 s, 125 MB/s 00:03:55.003 13:13:44 -- spdk/autotest.sh@105 -- # sync 00:03:55.264 13:13:44 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:55.264 13:13:44 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:55.264 13:13:44 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:58.561 13:13:47 -- spdk/autotest.sh@111 -- # uname -s 00:03:58.561 13:13:47 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:58.561 13:13:47 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:58.561 13:13:47 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 
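Each namespace above goes through the same probe-then-wipe sequence: `spdk-gpt.py` and `blkid` look for partition data, and when neither finds any ("No valid GPT data, bailing", empty `PTTYPE`) the first MiB of the device is zeroed with `dd`. A hedged sketch of that flow; `TARGET` defaults to a scratch file so the sketch is safe to run, and the `spdk-gpt.py` probe from the real script is omitted here:

```shell
#!/usr/bin/env bash
# Probe-then-wipe sketch modeled on the per-namespace loop above.
# TARGET is a scratch file by default; point it at a block device
# only deliberately.
TARGET=${TARGET:-$(mktemp)}

wipe_if_unpartitioned() {
    local block=$1 pt
    # blkid prints nothing (and exits non-zero) when no partition
    # table is present; an empty PTTYPE is treated as safe to wipe.
    pt=$(blkid -s PTTYPE -o value "$block" 2>/dev/null || true)
    if [[ -z $pt ]]; then
        # Mirror the trace: zero the first 1 MiB of the target.
        # conv=notrunc keeps a regular-file target from being
        # truncated first (irrelevant for a real block device).
        dd if=/dev/zero of="$block" bs=1M count=1 conv=notrunc 2>/dev/null
        echo "wiped $block"
    else
        echo "kept $block (PTTYPE=$pt)"
    fi
}

wipe_if_unpartitioned "$TARGET"
```

Against the freshly created scratch file this takes the wipe branch, just as each `/dev/nvme*` namespace does in the trace.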
00:03:58.821 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:58.821 Hugepages 00:03:58.821 node hugesize free / total 00:03:58.821 node0 1048576kB 0 / 0 00:03:58.821 node0 2048kB 0 / 0 00:03:58.821 00:03:58.821 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:58.821 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:59.080 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:59.080 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:03:59.080 13:13:48 -- spdk/autotest.sh@117 -- # uname -s 00:03:59.080 13:13:48 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:59.080 13:13:48 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:59.080 13:13:48 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:00.017 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:00.017 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:00.017 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:00.276 13:13:49 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:01.214 13:13:50 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:01.214 13:13:50 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:01.214 13:13:50 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:01.214 13:13:50 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:01.214 13:13:50 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:01.214 13:13:50 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:01.214 13:13:50 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:01.214 13:13:50 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:01.214 13:13:50 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:01.214 13:13:50 -- 
common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:01.214 13:13:50 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:01.214 13:13:50 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:01.784 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:01.784 Waiting for block devices as requested 00:04:01.784 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:01.784 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:02.045 13:13:51 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:02.045 13:13:51 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:02.045 13:13:51 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:02.045 13:13:51 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:02.045 13:13:51 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:02.045 13:13:51 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:02.045 13:13:51 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:02.045 13:13:51 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:02.045 13:13:51 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:02.045 13:13:51 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:02.045 13:13:51 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:02.045 13:13:51 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:02.045 13:13:51 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:02.045 13:13:51 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:02.045 13:13:51 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:02.045 13:13:51 -- common/autotest_common.sh@1534 -- 
# [[ 8 -ne 0 ]] 00:04:02.045 13:13:51 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:02.045 13:13:51 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:02.045 13:13:51 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:02.045 13:13:51 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:02.045 13:13:51 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:02.045 13:13:51 -- common/autotest_common.sh@1543 -- # continue 00:04:02.045 13:13:51 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:02.045 13:13:51 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:02.045 13:13:51 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:02.045 13:13:51 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:02.045 13:13:51 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:02.045 13:13:51 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:02.045 13:13:51 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:02.045 13:13:51 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:02.045 13:13:51 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:02.045 13:13:51 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:02.045 13:13:51 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:02.045 13:13:51 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:02.045 13:13:51 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:02.045 13:13:51 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:02.045 13:13:51 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:02.045 13:13:51 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:02.045 13:13:51 -- common/autotest_common.sh@1540 -- # nvme id-ctrl 
/dev/nvme0 00:04:02.045 13:13:51 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:02.045 13:13:51 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:02.045 13:13:51 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:02.045 13:13:51 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:02.045 13:13:51 -- common/autotest_common.sh@1543 -- # continue 00:04:02.045 13:13:51 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:02.045 13:13:51 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:02.045 13:13:51 -- common/autotest_common.sh@10 -- # set +x 00:04:02.045 13:13:51 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:02.045 13:13:51 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:02.045 13:13:51 -- common/autotest_common.sh@10 -- # set +x 00:04:02.045 13:13:51 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:02.983 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:02.983 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:02.983 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:03.242 13:13:52 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:03.242 13:13:52 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:03.242 13:13:52 -- common/autotest_common.sh@10 -- # set +x 00:04:03.242 13:13:52 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:03.242 13:13:52 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:03.242 13:13:52 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:03.242 13:13:52 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:03.242 13:13:52 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:03.242 13:13:52 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:03.242 13:13:52 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:03.242 13:13:52 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:03.242 
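The `opal_revert_cleanup` flow traced here (`get_nvme_bdfs_by_id 0x0a54`) reads each controller's PCI device id from sysfs and keeps only BDFs matching the target id. That filter can be sketched on its own; `SYSFS` and `TARGET_ID` are parameters of this sketch, not of the real helper:

```shell
#!/usr/bin/env bash
# Sketch of a PCI device-id filter over NVMe BDFs, after the trace's
# get_nvme_bdfs_by_id. SYSFS and TARGET_ID are sketch parameters.
SYSFS=${SYSFS:-/sys}
TARGET_ID=${TARGET_ID:-0x0a54}

filter_bdfs_by_id() {
    local bdf
    for bdf in "$@"; do
        # Each PCI function exposes its device id in sysfs.
        [[ -e $SYSFS/bus/pci/devices/$bdf/device ]] || continue
        if [[ $(<"$SYSFS/bus/pci/devices/$bdf/device") == "$TARGET_ID" ]]; then
            printf '%s\n' "$bdf"
        fi
    done
}

# Both QEMU controllers in this run report 0x0010, so nothing
# matches 0x0a54 and the cleanup has no work to do.
filter_bdfs_by_id 0000:00:10.0 0000:00:11.0
```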
13:13:52 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:03.242 13:13:52 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:03.242 13:13:52 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:03.242 13:13:52 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:03.242 13:13:52 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:03.242 13:13:52 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:03.242 13:13:52 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:03.242 13:13:52 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:03.242 13:13:52 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:03.242 13:13:52 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:03.242 13:13:52 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:03.242 13:13:52 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:03.242 13:13:52 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:03.243 13:13:52 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:03.243 13:13:52 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:03.243 13:13:52 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:03.243 13:13:52 -- common/autotest_common.sh@1572 -- # return 0 00:04:03.243 13:13:52 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:03.243 13:13:52 -- common/autotest_common.sh@1580 -- # return 0 00:04:03.243 13:13:52 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:03.243 13:13:52 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:03.243 13:13:52 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:03.243 13:13:52 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:03.243 13:13:52 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:03.243 13:13:52 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:04:03.243 13:13:52 -- common/autotest_common.sh@10 -- # set +x 00:04:03.243 13:13:52 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:03.243 13:13:52 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:03.243 13:13:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:03.243 13:13:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:03.243 13:13:52 -- common/autotest_common.sh@10 -- # set +x 00:04:03.243 ************************************ 00:04:03.243 START TEST env 00:04:03.243 ************************************ 00:04:03.243 13:13:52 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:03.502 * Looking for test storage... 00:04:03.502 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:03.502 13:13:52 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:03.502 13:13:52 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:03.502 13:13:52 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:03.502 13:13:52 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:03.502 13:13:52 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:03.502 13:13:52 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:03.502 13:13:52 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:03.502 13:13:52 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:03.502 13:13:52 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:03.502 13:13:52 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:03.502 13:13:52 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:03.502 13:13:52 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:03.502 13:13:52 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:03.502 13:13:52 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:03.502 13:13:52 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:03.502 13:13:52 env -- 
scripts/common.sh@344 -- # case "$op" in 00:04:03.502 13:13:52 env -- scripts/common.sh@345 -- # : 1 00:04:03.502 13:13:52 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:03.502 13:13:52 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:03.502 13:13:52 env -- scripts/common.sh@365 -- # decimal 1 00:04:03.502 13:13:52 env -- scripts/common.sh@353 -- # local d=1 00:04:03.502 13:13:52 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:03.502 13:13:52 env -- scripts/common.sh@355 -- # echo 1 00:04:03.502 13:13:52 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:03.502 13:13:52 env -- scripts/common.sh@366 -- # decimal 2 00:04:03.502 13:13:52 env -- scripts/common.sh@353 -- # local d=2 00:04:03.502 13:13:52 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:03.502 13:13:52 env -- scripts/common.sh@355 -- # echo 2 00:04:03.502 13:13:52 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:03.502 13:13:52 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:03.502 13:13:52 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:03.502 13:13:52 env -- scripts/common.sh@368 -- # return 0 00:04:03.502 13:13:52 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:03.502 13:13:52 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:03.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.502 --rc genhtml_branch_coverage=1 00:04:03.502 --rc genhtml_function_coverage=1 00:04:03.502 --rc genhtml_legend=1 00:04:03.502 --rc geninfo_all_blocks=1 00:04:03.502 --rc geninfo_unexecuted_blocks=1 00:04:03.502 00:04:03.502 ' 00:04:03.502 13:13:52 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:03.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.502 --rc genhtml_branch_coverage=1 00:04:03.502 --rc genhtml_function_coverage=1 00:04:03.502 --rc genhtml_legend=1 00:04:03.502 --rc 
geninfo_all_blocks=1 00:04:03.502 --rc geninfo_unexecuted_blocks=1 00:04:03.502 00:04:03.502 ' 00:04:03.502 13:13:52 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:03.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.502 --rc genhtml_branch_coverage=1 00:04:03.502 --rc genhtml_function_coverage=1 00:04:03.502 --rc genhtml_legend=1 00:04:03.502 --rc geninfo_all_blocks=1 00:04:03.502 --rc geninfo_unexecuted_blocks=1 00:04:03.502 00:04:03.502 ' 00:04:03.502 13:13:52 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:03.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.502 --rc genhtml_branch_coverage=1 00:04:03.502 --rc genhtml_function_coverage=1 00:04:03.502 --rc genhtml_legend=1 00:04:03.502 --rc geninfo_all_blocks=1 00:04:03.502 --rc geninfo_unexecuted_blocks=1 00:04:03.502 00:04:03.502 ' 00:04:03.502 13:13:52 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:03.502 13:13:52 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:03.502 13:13:52 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:03.502 13:13:52 env -- common/autotest_common.sh@10 -- # set +x 00:04:03.502 ************************************ 00:04:03.502 START TEST env_memory 00:04:03.502 ************************************ 00:04:03.502 13:13:52 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:03.502 00:04:03.502 00:04:03.502 CUnit - A unit testing framework for C - Version 2.1-3 00:04:03.502 http://cunit.sourceforge.net/ 00:04:03.502 00:04:03.502 00:04:03.502 Suite: memory 00:04:03.761 Test: alloc and free memory map ...[2024-11-17 13:13:52.752150] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:03.761 passed 00:04:03.761 Test: mem map translation ...[2024-11-17 13:13:52.801319] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:03.761 [2024-11-17 13:13:52.801466] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:03.761 [2024-11-17 13:13:52.801613] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:03.762 [2024-11-17 13:13:52.801733] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:03.762 passed 00:04:03.762 Test: mem map registration ...[2024-11-17 13:13:52.874905] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:03.762 [2024-11-17 13:13:52.875056] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:03.762 passed 00:04:03.762 Test: mem map adjacent registrations ...passed 00:04:03.762 00:04:03.762 Run Summary: Type Total Ran Passed Failed Inactive 00:04:03.762 suites 1 1 n/a 0 0 00:04:03.762 tests 4 4 4 0 0 00:04:03.762 asserts 152 152 152 0 n/a 00:04:03.762 00:04:03.762 Elapsed time = 0.273 seconds 00:04:04.021 00:04:04.021 real 0m0.337s 00:04:04.021 user 0m0.289s 00:04:04.021 sys 0m0.034s 00:04:04.021 13:13:53 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:04.021 13:13:53 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:04.021 ************************************ 00:04:04.021 END TEST env_memory 00:04:04.021 ************************************ 00:04:04.021 13:13:53 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:04.021 
13:13:53 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:04.021 13:13:53 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:04.021 13:13:53 env -- common/autotest_common.sh@10 -- # set +x 00:04:04.021 ************************************ 00:04:04.021 START TEST env_vtophys 00:04:04.021 ************************************ 00:04:04.021 13:13:53 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:04.021 EAL: lib.eal log level changed from notice to debug 00:04:04.021 EAL: Detected lcore 0 as core 0 on socket 0 00:04:04.021 EAL: Detected lcore 1 as core 0 on socket 0 00:04:04.021 EAL: Detected lcore 2 as core 0 on socket 0 00:04:04.021 EAL: Detected lcore 3 as core 0 on socket 0 00:04:04.021 EAL: Detected lcore 4 as core 0 on socket 0 00:04:04.021 EAL: Detected lcore 5 as core 0 on socket 0 00:04:04.021 EAL: Detected lcore 6 as core 0 on socket 0 00:04:04.021 EAL: Detected lcore 7 as core 0 on socket 0 00:04:04.021 EAL: Detected lcore 8 as core 0 on socket 0 00:04:04.021 EAL: Detected lcore 9 as core 0 on socket 0 00:04:04.021 EAL: Maximum logical cores by configuration: 128 00:04:04.021 EAL: Detected CPU lcores: 10 00:04:04.021 EAL: Detected NUMA nodes: 1 00:04:04.021 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:04.021 EAL: Detected shared linkage of DPDK 00:04:04.021 EAL: No shared files mode enabled, IPC will be disabled 00:04:04.021 EAL: Selected IOVA mode 'PA' 00:04:04.021 EAL: Probing VFIO support... 00:04:04.021 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:04.021 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:04.021 EAL: Ask a virtual area of 0x2e000 bytes 00:04:04.021 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:04.021 EAL: Setting up physically contiguous memory... 
00:04:04.021 EAL: Setting maximum number of open files to 524288 00:04:04.021 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:04.021 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:04.021 EAL: Ask a virtual area of 0x61000 bytes 00:04:04.021 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:04.021 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:04.021 EAL: Ask a virtual area of 0x400000000 bytes 00:04:04.021 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:04.021 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:04.021 EAL: Ask a virtual area of 0x61000 bytes 00:04:04.021 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:04.021 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:04.021 EAL: Ask a virtual area of 0x400000000 bytes 00:04:04.021 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:04.021 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:04.021 EAL: Ask a virtual area of 0x61000 bytes 00:04:04.021 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:04.021 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:04.021 EAL: Ask a virtual area of 0x400000000 bytes 00:04:04.021 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:04.021 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:04.021 EAL: Ask a virtual area of 0x61000 bytes 00:04:04.021 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:04.021 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:04.021 EAL: Ask a virtual area of 0x400000000 bytes 00:04:04.021 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:04.021 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:04.021 EAL: Hugepages will be freed exactly as allocated. 
00:04:04.021 EAL: No shared files mode enabled, IPC is disabled 00:04:04.021 EAL: No shared files mode enabled, IPC is disabled 00:04:04.280 EAL: TSC frequency is ~2290000 KHz 00:04:04.280 EAL: Main lcore 0 is ready (tid=7fb2a3c0fa40;cpuset=[0]) 00:04:04.280 EAL: Trying to obtain current memory policy. 00:04:04.280 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.280 EAL: Restoring previous memory policy: 0 00:04:04.280 EAL: request: mp_malloc_sync 00:04:04.280 EAL: No shared files mode enabled, IPC is disabled 00:04:04.280 EAL: Heap on socket 0 was expanded by 2MB 00:04:04.280 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:04.280 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:04.280 EAL: Mem event callback 'spdk:(nil)' registered 00:04:04.280 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:04.280 00:04:04.280 00:04:04.280 CUnit - A unit testing framework for C - Version 2.1-3 00:04:04.280 http://cunit.sourceforge.net/ 00:04:04.280 00:04:04.280 00:04:04.280 Suite: components_suite 00:04:04.539 Test: vtophys_malloc_test ...passed 00:04:04.539 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:04.539 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.539 EAL: Restoring previous memory policy: 4 00:04:04.539 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.539 EAL: request: mp_malloc_sync 00:04:04.539 EAL: No shared files mode enabled, IPC is disabled 00:04:04.539 EAL: Heap on socket 0 was expanded by 4MB 00:04:04.539 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.539 EAL: request: mp_malloc_sync 00:04:04.539 EAL: No shared files mode enabled, IPC is disabled 00:04:04.539 EAL: Heap on socket 0 was shrunk by 4MB 00:04:04.539 EAL: Trying to obtain current memory policy. 
00:04:04.539 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.539 EAL: Restoring previous memory policy: 4 00:04:04.539 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.539 EAL: request: mp_malloc_sync 00:04:04.539 EAL: No shared files mode enabled, IPC is disabled 00:04:04.539 EAL: Heap on socket 0 was expanded by 6MB 00:04:04.539 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.539 EAL: request: mp_malloc_sync 00:04:04.539 EAL: No shared files mode enabled, IPC is disabled 00:04:04.539 EAL: Heap on socket 0 was shrunk by 6MB 00:04:04.539 EAL: Trying to obtain current memory policy. 00:04:04.539 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.539 EAL: Restoring previous memory policy: 4 00:04:04.539 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.539 EAL: request: mp_malloc_sync 00:04:04.539 EAL: No shared files mode enabled, IPC is disabled 00:04:04.539 EAL: Heap on socket 0 was expanded by 10MB 00:04:04.798 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.798 EAL: request: mp_malloc_sync 00:04:04.798 EAL: No shared files mode enabled, IPC is disabled 00:04:04.798 EAL: Heap on socket 0 was shrunk by 10MB 00:04:04.798 EAL: Trying to obtain current memory policy. 00:04:04.798 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.798 EAL: Restoring previous memory policy: 4 00:04:04.798 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.798 EAL: request: mp_malloc_sync 00:04:04.798 EAL: No shared files mode enabled, IPC is disabled 00:04:04.798 EAL: Heap on socket 0 was expanded by 18MB 00:04:04.798 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.798 EAL: request: mp_malloc_sync 00:04:04.798 EAL: No shared files mode enabled, IPC is disabled 00:04:04.798 EAL: Heap on socket 0 was shrunk by 18MB 00:04:04.798 EAL: Trying to obtain current memory policy. 
00:04:04.798 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.798 EAL: Restoring previous memory policy: 4 00:04:04.798 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.798 EAL: request: mp_malloc_sync 00:04:04.798 EAL: No shared files mode enabled, IPC is disabled 00:04:04.798 EAL: Heap on socket 0 was expanded by 34MB 00:04:04.798 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.798 EAL: request: mp_malloc_sync 00:04:04.798 EAL: No shared files mode enabled, IPC is disabled 00:04:04.798 EAL: Heap on socket 0 was shrunk by 34MB 00:04:04.798 EAL: Trying to obtain current memory policy. 00:04:04.798 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.798 EAL: Restoring previous memory policy: 4 00:04:04.798 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.798 EAL: request: mp_malloc_sync 00:04:04.798 EAL: No shared files mode enabled, IPC is disabled 00:04:04.798 EAL: Heap on socket 0 was expanded by 66MB 00:04:05.056 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.056 EAL: request: mp_malloc_sync 00:04:05.056 EAL: No shared files mode enabled, IPC is disabled 00:04:05.056 EAL: Heap on socket 0 was shrunk by 66MB 00:04:05.056 EAL: Trying to obtain current memory policy. 00:04:05.056 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.315 EAL: Restoring previous memory policy: 4 00:04:05.315 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.315 EAL: request: mp_malloc_sync 00:04:05.315 EAL: No shared files mode enabled, IPC is disabled 00:04:05.315 EAL: Heap on socket 0 was expanded by 130MB 00:04:05.315 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.573 EAL: request: mp_malloc_sync 00:04:05.573 EAL: No shared files mode enabled, IPC is disabled 00:04:05.574 EAL: Heap on socket 0 was shrunk by 130MB 00:04:05.574 EAL: Trying to obtain current memory policy. 
00:04:05.574 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.832 EAL: Restoring previous memory policy: 4 00:04:05.832 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.832 EAL: request: mp_malloc_sync 00:04:05.832 EAL: No shared files mode enabled, IPC is disabled 00:04:05.832 EAL: Heap on socket 0 was expanded by 258MB 00:04:06.400 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.400 EAL: request: mp_malloc_sync 00:04:06.400 EAL: No shared files mode enabled, IPC is disabled 00:04:06.400 EAL: Heap on socket 0 was shrunk by 258MB 00:04:06.658 EAL: Trying to obtain current memory policy. 00:04:06.658 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.949 EAL: Restoring previous memory policy: 4 00:04:06.949 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.949 EAL: request: mp_malloc_sync 00:04:06.949 EAL: No shared files mode enabled, IPC is disabled 00:04:06.949 EAL: Heap on socket 0 was expanded by 514MB 00:04:07.882 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.882 EAL: request: mp_malloc_sync 00:04:07.882 EAL: No shared files mode enabled, IPC is disabled 00:04:07.882 EAL: Heap on socket 0 was shrunk by 514MB 00:04:08.817 EAL: Trying to obtain current memory policy. 
00:04:08.817 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.075 EAL: Restoring previous memory policy: 4 00:04:09.075 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.075 EAL: request: mp_malloc_sync 00:04:09.075 EAL: No shared files mode enabled, IPC is disabled 00:04:09.075 EAL: Heap on socket 0 was expanded by 1026MB 00:04:10.978 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.237 EAL: request: mp_malloc_sync 00:04:11.237 EAL: No shared files mode enabled, IPC is disabled 00:04:11.237 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:13.148 passed 00:04:13.148 00:04:13.148 Run Summary: Type Total Ran Passed Failed Inactive 00:04:13.148 suites 1 1 n/a 0 0 00:04:13.148 tests 2 2 2 0 0 00:04:13.148 asserts 5754 5754 5754 0 n/a 00:04:13.148 00:04:13.148 Elapsed time = 8.493 seconds 00:04:13.148 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.148 EAL: request: mp_malloc_sync 00:04:13.148 EAL: No shared files mode enabled, IPC is disabled 00:04:13.148 EAL: Heap on socket 0 was shrunk by 2MB 00:04:13.148 EAL: No shared files mode enabled, IPC is disabled 00:04:13.148 EAL: No shared files mode enabled, IPC is disabled 00:04:13.148 EAL: No shared files mode enabled, IPC is disabled 00:04:13.148 00:04:13.148 real 0m8.849s 00:04:13.148 user 0m7.816s 00:04:13.148 sys 0m0.865s 00:04:13.148 ************************************ 00:04:13.148 13:14:01 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:13.148 13:14:01 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:13.148 END TEST env_vtophys 00:04:13.148 ************************************ 00:04:13.148 13:14:01 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:13.148 13:14:01 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:13.148 13:14:01 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:13.148 13:14:01 env -- common/autotest_common.sh@10 -- # set +x 00:04:13.148 
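The env_vtophys run above expands and shrinks the socket-0 heap in a doubling sequence: 34MB, 66MB, 130MB, 258MB, 514MB, 1026MB — each a power of two plus 2MB of heap overhead (34 = 32 + 2, ..., 1026 = 1024 + 2). A minimal Python sketch of that pattern, parsing lines shaped like the EAL output above (the regex is illustrative, not part of the SPDK test):

```python
import re

# Excerpt of the EAL expansion messages from the env_vtophys log above.
log = """
EAL: Heap on socket 0 was expanded by 34MB
EAL: Heap on socket 0 was expanded by 66MB
EAL: Heap on socket 0 was expanded by 130MB
EAL: Heap on socket 0 was expanded by 258MB
EAL: Heap on socket 0 was expanded by 514MB
EAL: Heap on socket 0 was expanded by 1026MB
"""

sizes = [int(m) for m in re.findall(r"expanded by (\d+)MB", log)]
assert sizes == [34, 66, 130, 258, 514, 1026]

# Each expansion is a power-of-two allocation plus 2MB of overhead.
for size in sizes:
    n = size - 2
    assert n & (n - 1) == 0  # n is a power of two
```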
************************************ 00:04:13.148 START TEST env_pci 00:04:13.148 ************************************ 00:04:13.148 13:14:01 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:13.148 00:04:13.148 00:04:13.148 CUnit - A unit testing framework for C - Version 2.1-3 00:04:13.148 http://cunit.sourceforge.net/ 00:04:13.148 00:04:13.148 00:04:13.148 Suite: pci 00:04:13.148 Test: pci_hook ...[2024-11-17 13:14:02.029128] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56790 has claimed it 00:04:13.148 passed 00:04:13.148 00:04:13.148 Run Summary: Type Total Ran Passed Failed Inactive 00:04:13.148 suites 1 1 n/a 0 0 00:04:13.148 tests 1 1 1 0 0 00:04:13.148 asserts 25 25 25 0 n/a 00:04:13.148 00:04:13.148 Elapsed time = 0.005 seconds 00:04:13.148 EAL: Cannot find device (10000:00:01.0) 00:04:13.148 EAL: Failed to attach device on primary process 00:04:13.148 00:04:13.148 real 0m0.096s 00:04:13.148 user 0m0.045s 00:04:13.148 sys 0m0.050s 00:04:13.148 ************************************ 00:04:13.148 END TEST env_pci 00:04:13.148 ************************************ 00:04:13.148 13:14:02 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:13.148 13:14:02 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:13.148 13:14:02 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:13.148 13:14:02 env -- env/env.sh@15 -- # uname 00:04:13.148 13:14:02 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:13.148 13:14:02 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:13.148 13:14:02 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:13.148 13:14:02 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:13.148 13:14:02 env 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:13.148 13:14:02 env -- common/autotest_common.sh@10 -- # set +x 00:04:13.148 ************************************ 00:04:13.148 START TEST env_dpdk_post_init 00:04:13.148 ************************************ 00:04:13.148 13:14:02 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:13.148 EAL: Detected CPU lcores: 10 00:04:13.148 EAL: Detected NUMA nodes: 1 00:04:13.148 EAL: Detected shared linkage of DPDK 00:04:13.148 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:13.148 EAL: Selected IOVA mode 'PA' 00:04:13.148 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:13.408 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:13.408 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:13.408 Starting DPDK initialization... 00:04:13.409 Starting SPDK post initialization... 00:04:13.409 SPDK NVMe probe 00:04:13.409 Attaching to 0000:00:10.0 00:04:13.409 Attaching to 0000:00:11.0 00:04:13.409 Attached to 0000:00:10.0 00:04:13.409 Attached to 0000:00:11.0 00:04:13.409 Cleaning up... 
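The env_dpdk_post_init run probes two emulated NVMe controllers (vendor:device 1b36:0010) at 0000:00:10.0 and 0000:00:11.0. A hedged sketch of pulling the PCI BDF (domain:bus:device.function) addresses out of probe lines like those above — the regex and variable names are illustrative:

```python
import re

# Two probe lines in the shape emitted by EAL above.
probe_log = (
    "EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1)\n"
    "EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1)\n"
)

# BDF: 4 hex digits (domain), 2 (bus), 2 (device), 1 octal digit (function).
bdf_re = re.compile(r"device: ([0-9a-f]{4}:[0-9a-f]{2}:[0-9a-f]{2}\.[0-7])")
bdfs = bdf_re.findall(probe_log)
assert bdfs == ["0000:00:10.0", "0000:00:11.0"]
```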
00:04:13.409 00:04:13.409 real 0m0.279s 00:04:13.409 user 0m0.098s 00:04:13.409 sys 0m0.081s 00:04:13.409 ************************************ 00:04:13.409 END TEST env_dpdk_post_init 00:04:13.409 ************************************ 00:04:13.409 13:14:02 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:13.409 13:14:02 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:13.409 13:14:02 env -- env/env.sh@26 -- # uname 00:04:13.409 13:14:02 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:13.409 13:14:02 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:13.409 13:14:02 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:13.409 13:14:02 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:13.409 13:14:02 env -- common/autotest_common.sh@10 -- # set +x 00:04:13.409 ************************************ 00:04:13.409 START TEST env_mem_callbacks 00:04:13.409 ************************************ 00:04:13.409 13:14:02 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:13.409 EAL: Detected CPU lcores: 10 00:04:13.409 EAL: Detected NUMA nodes: 1 00:04:13.409 EAL: Detected shared linkage of DPDK 00:04:13.409 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:13.409 EAL: Selected IOVA mode 'PA' 00:04:13.669 00:04:13.669 00:04:13.669 CUnit - A unit testing framework for C - Version 2.1-3 00:04:13.669 http://cunit.sourceforge.net/ 00:04:13.669 00:04:13.669 00:04:13.669 Suite: memory 00:04:13.669 Test: test ... 
00:04:13.669 register 0x200000200000 2097152 00:04:13.669 malloc 3145728 00:04:13.669 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:13.669 register 0x200000400000 4194304 00:04:13.669 buf 0x2000004fffc0 len 3145728 PASSED 00:04:13.669 malloc 64 00:04:13.669 buf 0x2000004ffec0 len 64 PASSED 00:04:13.669 malloc 4194304 00:04:13.669 register 0x200000800000 6291456 00:04:13.669 buf 0x2000009fffc0 len 4194304 PASSED 00:04:13.669 free 0x2000004fffc0 3145728 00:04:13.669 free 0x2000004ffec0 64 00:04:13.669 unregister 0x200000400000 4194304 PASSED 00:04:13.669 free 0x2000009fffc0 4194304 00:04:13.669 unregister 0x200000800000 6291456 PASSED 00:04:13.669 malloc 8388608 00:04:13.669 register 0x200000400000 10485760 00:04:13.669 buf 0x2000005fffc0 len 8388608 PASSED 00:04:13.669 free 0x2000005fffc0 8388608 00:04:13.669 unregister 0x200000400000 10485760 PASSED 00:04:13.669 passed 00:04:13.669 00:04:13.669 Run Summary: Type Total Ran Passed Failed Inactive 00:04:13.669 suites 1 1 n/a 0 0 00:04:13.669 tests 1 1 1 0 0 00:04:13.669 asserts 15 15 15 0 n/a 00:04:13.669 00:04:13.669 Elapsed time = 0.082 seconds 00:04:13.669 00:04:13.669 real 0m0.278s 00:04:13.669 user 0m0.115s 00:04:13.669 sys 0m0.061s 00:04:13.669 13:14:02 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:13.669 ************************************ 00:04:13.669 END TEST env_mem_callbacks 00:04:13.669 ************************************ 00:04:13.669 13:14:02 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:13.669 ************************************ 00:04:13.669 END TEST env 00:04:13.669 ************************************ 00:04:13.669 00:04:13.669 real 0m10.405s 00:04:13.669 user 0m8.576s 00:04:13.669 sys 0m1.460s 00:04:13.669 13:14:02 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:13.669 13:14:02 env -- common/autotest_common.sh@10 -- # set +x 00:04:13.669 13:14:02 -- spdk/autotest.sh@156 -- # run_test rpc 
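The env_mem_callbacks trace above pairs every `unregister` with a prior `register` of the same address and length; only the initial 2MB region is never unregistered in this excerpt. A small Python sketch that replays those events and checks the bookkeeping (the event lines are taken from the log; the tracking dict is illustrative):

```python
import re

# register/unregister events as they appear in the mem_callbacks log above.
trace = """
register 0x200000200000 2097152
register 0x200000400000 4194304
register 0x200000800000 6291456
unregister 0x200000400000 4194304
unregister 0x200000800000 6291456
register 0x200000400000 10485760
unregister 0x200000400000 10485760
"""

regions = {}
for op, addr, length in re.findall(r"(register|unregister) (0x[0-9a-f]+) (\d+)", trace):
    if op == "register":
        regions[addr] = int(length)
    else:
        # An unregister must match the length of the earlier register.
        assert regions.pop(addr) == int(length)

# Only the initial 2MB region remains registered in this excerpt.
assert regions == {"0x200000200000": 2097152}
```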
/home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:13.669 13:14:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:13.669 13:14:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:13.669 13:14:02 -- common/autotest_common.sh@10 -- # set +x 00:04:13.929 ************************************ 00:04:13.929 START TEST rpc 00:04:13.929 ************************************ 00:04:13.929 13:14:02 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:13.929 * Looking for test storage... 00:04:13.929 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:13.929 13:14:03 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:13.929 13:14:03 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:13.929 13:14:03 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:13.929 13:14:03 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:13.929 13:14:03 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:13.929 13:14:03 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:13.929 13:14:03 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:13.929 13:14:03 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:13.929 13:14:03 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:13.929 13:14:03 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:13.929 13:14:03 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:13.929 13:14:03 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:13.929 13:14:03 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:13.929 13:14:03 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:13.929 13:14:03 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:13.929 13:14:03 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:13.929 13:14:03 rpc -- scripts/common.sh@345 -- # : 1 00:04:13.929 13:14:03 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:13.929 13:14:03 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:13.929 13:14:03 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:13.929 13:14:03 rpc -- scripts/common.sh@353 -- # local d=1 00:04:13.929 13:14:03 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:13.929 13:14:03 rpc -- scripts/common.sh@355 -- # echo 1 00:04:13.929 13:14:03 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:13.929 13:14:03 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:13.929 13:14:03 rpc -- scripts/common.sh@353 -- # local d=2 00:04:13.929 13:14:03 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:13.929 13:14:03 rpc -- scripts/common.sh@355 -- # echo 2 00:04:13.929 13:14:03 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:13.929 13:14:03 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:13.929 13:14:03 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:13.929 13:14:03 rpc -- scripts/common.sh@368 -- # return 0 00:04:13.929 13:14:03 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:13.929 13:14:03 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:13.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.929 --rc genhtml_branch_coverage=1 00:04:13.929 --rc genhtml_function_coverage=1 00:04:13.929 --rc genhtml_legend=1 00:04:13.929 --rc geninfo_all_blocks=1 00:04:13.929 --rc geninfo_unexecuted_blocks=1 00:04:13.929 00:04:13.929 ' 00:04:13.929 13:14:03 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:13.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.929 --rc genhtml_branch_coverage=1 00:04:13.929 --rc genhtml_function_coverage=1 00:04:13.929 --rc genhtml_legend=1 00:04:13.929 --rc geninfo_all_blocks=1 00:04:13.929 --rc geninfo_unexecuted_blocks=1 00:04:13.929 00:04:13.929 ' 00:04:13.929 13:14:03 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:13.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:13.929 --rc genhtml_branch_coverage=1 00:04:13.929 --rc genhtml_function_coverage=1 00:04:13.929 --rc genhtml_legend=1 00:04:13.929 --rc geninfo_all_blocks=1 00:04:13.929 --rc geninfo_unexecuted_blocks=1 00:04:13.929 00:04:13.929 ' 00:04:13.929 13:14:03 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:13.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.929 --rc genhtml_branch_coverage=1 00:04:13.929 --rc genhtml_function_coverage=1 00:04:13.929 --rc genhtml_legend=1 00:04:13.929 --rc geninfo_all_blocks=1 00:04:13.929 --rc geninfo_unexecuted_blocks=1 00:04:13.929 00:04:13.929 ' 00:04:13.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:13.929 13:14:03 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56917 00:04:13.929 13:14:03 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:13.929 13:14:03 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:13.929 13:14:03 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56917 00:04:13.929 13:14:03 rpc -- common/autotest_common.sh@835 -- # '[' -z 56917 ']' 00:04:13.929 13:14:03 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:13.929 13:14:03 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:13.929 13:14:03 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:13.929 13:14:03 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:13.929 13:14:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.189 [2024-11-17 13:14:03.213047] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:04:14.189 [2024-11-17 13:14:03.213266] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56917 ] 00:04:14.189 [2024-11-17 13:14:03.386512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:14.449 [2024-11-17 13:14:03.496920] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:14.449 [2024-11-17 13:14:03.497061] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56917' to capture a snapshot of events at runtime. 00:04:14.449 [2024-11-17 13:14:03.497102] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:14.449 [2024-11-17 13:14:03.497134] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:14.449 [2024-11-17 13:14:03.497154] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56917 for offline analysis/debug. 
00:04:14.449 [2024-11-17 13:14:03.498548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.387 13:14:04 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:15.387 13:14:04 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:15.387 13:14:04 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:15.387 13:14:04 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:15.387 13:14:04 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:15.387 13:14:04 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:15.387 13:14:04 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.387 13:14:04 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.387 13:14:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.387 ************************************ 00:04:15.387 START TEST rpc_integrity 00:04:15.387 ************************************ 00:04:15.387 13:14:04 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:15.387 13:14:04 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:15.387 13:14:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.387 13:14:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.387 13:14:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.387 13:14:04 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:15.387 13:14:04 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:15.387 13:14:04 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:15.387 13:14:04 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:15.387 13:14:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.387 13:14:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.387 13:14:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.387 13:14:04 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:15.387 13:14:04 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:15.387 13:14:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.387 13:14:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.387 13:14:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.387 13:14:04 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:15.387 { 00:04:15.387 "name": "Malloc0", 00:04:15.387 "aliases": [ 00:04:15.387 "6d94b47d-5c08-4cad-9670-a1e94e7950e9" 00:04:15.387 ], 00:04:15.387 "product_name": "Malloc disk", 00:04:15.387 "block_size": 512, 00:04:15.387 "num_blocks": 16384, 00:04:15.387 "uuid": "6d94b47d-5c08-4cad-9670-a1e94e7950e9", 00:04:15.387 "assigned_rate_limits": { 00:04:15.387 "rw_ios_per_sec": 0, 00:04:15.387 "rw_mbytes_per_sec": 0, 00:04:15.387 "r_mbytes_per_sec": 0, 00:04:15.387 "w_mbytes_per_sec": 0 00:04:15.387 }, 00:04:15.387 "claimed": false, 00:04:15.387 "zoned": false, 00:04:15.387 "supported_io_types": { 00:04:15.387 "read": true, 00:04:15.387 "write": true, 00:04:15.387 "unmap": true, 00:04:15.387 "flush": true, 00:04:15.387 "reset": true, 00:04:15.387 "nvme_admin": false, 00:04:15.387 "nvme_io": false, 00:04:15.387 "nvme_io_md": false, 00:04:15.387 "write_zeroes": true, 00:04:15.387 "zcopy": true, 00:04:15.387 "get_zone_info": false, 00:04:15.387 "zone_management": false, 00:04:15.387 "zone_append": false, 00:04:15.387 "compare": false, 00:04:15.387 "compare_and_write": false, 00:04:15.387 "abort": true, 00:04:15.387 "seek_hole": false, 
00:04:15.387 "seek_data": false, 00:04:15.387 "copy": true, 00:04:15.387 "nvme_iov_md": false 00:04:15.387 }, 00:04:15.387 "memory_domains": [ 00:04:15.387 { 00:04:15.387 "dma_device_id": "system", 00:04:15.387 "dma_device_type": 1 00:04:15.387 }, 00:04:15.387 { 00:04:15.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:15.387 "dma_device_type": 2 00:04:15.387 } 00:04:15.387 ], 00:04:15.387 "driver_specific": {} 00:04:15.387 } 00:04:15.387 ]' 00:04:15.387 13:14:04 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:15.387 13:14:04 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:15.387 13:14:04 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:15.388 13:14:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.388 13:14:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.388 [2024-11-17 13:14:04.511117] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:15.388 [2024-11-17 13:14:04.511278] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:15.388 [2024-11-17 13:14:04.511312] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:04:15.388 [2024-11-17 13:14:04.511326] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:15.388 [2024-11-17 13:14:04.513737] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:15.388 [2024-11-17 13:14:04.513781] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:15.388 Passthru0 00:04:15.388 13:14:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.388 13:14:04 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:15.388 13:14:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.388 13:14:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:04:15.388 13:14:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.388 13:14:04 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:15.388 { 00:04:15.388 "name": "Malloc0", 00:04:15.388 "aliases": [ 00:04:15.388 "6d94b47d-5c08-4cad-9670-a1e94e7950e9" 00:04:15.388 ], 00:04:15.388 "product_name": "Malloc disk", 00:04:15.388 "block_size": 512, 00:04:15.388 "num_blocks": 16384, 00:04:15.388 "uuid": "6d94b47d-5c08-4cad-9670-a1e94e7950e9", 00:04:15.388 "assigned_rate_limits": { 00:04:15.388 "rw_ios_per_sec": 0, 00:04:15.388 "rw_mbytes_per_sec": 0, 00:04:15.388 "r_mbytes_per_sec": 0, 00:04:15.388 "w_mbytes_per_sec": 0 00:04:15.388 }, 00:04:15.388 "claimed": true, 00:04:15.388 "claim_type": "exclusive_write", 00:04:15.388 "zoned": false, 00:04:15.388 "supported_io_types": { 00:04:15.388 "read": true, 00:04:15.388 "write": true, 00:04:15.388 "unmap": true, 00:04:15.388 "flush": true, 00:04:15.388 "reset": true, 00:04:15.388 "nvme_admin": false, 00:04:15.388 "nvme_io": false, 00:04:15.388 "nvme_io_md": false, 00:04:15.388 "write_zeroes": true, 00:04:15.388 "zcopy": true, 00:04:15.388 "get_zone_info": false, 00:04:15.388 "zone_management": false, 00:04:15.388 "zone_append": false, 00:04:15.388 "compare": false, 00:04:15.388 "compare_and_write": false, 00:04:15.388 "abort": true, 00:04:15.388 "seek_hole": false, 00:04:15.388 "seek_data": false, 00:04:15.388 "copy": true, 00:04:15.388 "nvme_iov_md": false 00:04:15.388 }, 00:04:15.388 "memory_domains": [ 00:04:15.388 { 00:04:15.388 "dma_device_id": "system", 00:04:15.388 "dma_device_type": 1 00:04:15.388 }, 00:04:15.388 { 00:04:15.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:15.388 "dma_device_type": 2 00:04:15.388 } 00:04:15.388 ], 00:04:15.388 "driver_specific": {} 00:04:15.388 }, 00:04:15.388 { 00:04:15.388 "name": "Passthru0", 00:04:15.388 "aliases": [ 00:04:15.388 "9222a5c4-a90f-5a08-9343-32ee75600db5" 00:04:15.388 ], 00:04:15.388 "product_name": "passthru", 00:04:15.388 
"block_size": 512, 00:04:15.388 "num_blocks": 16384, 00:04:15.388 "uuid": "9222a5c4-a90f-5a08-9343-32ee75600db5", 00:04:15.388 "assigned_rate_limits": { 00:04:15.388 "rw_ios_per_sec": 0, 00:04:15.388 "rw_mbytes_per_sec": 0, 00:04:15.388 "r_mbytes_per_sec": 0, 00:04:15.388 "w_mbytes_per_sec": 0 00:04:15.388 }, 00:04:15.388 "claimed": false, 00:04:15.388 "zoned": false, 00:04:15.388 "supported_io_types": { 00:04:15.388 "read": true, 00:04:15.388 "write": true, 00:04:15.388 "unmap": true, 00:04:15.388 "flush": true, 00:04:15.388 "reset": true, 00:04:15.388 "nvme_admin": false, 00:04:15.388 "nvme_io": false, 00:04:15.388 "nvme_io_md": false, 00:04:15.388 "write_zeroes": true, 00:04:15.388 "zcopy": true, 00:04:15.388 "get_zone_info": false, 00:04:15.388 "zone_management": false, 00:04:15.388 "zone_append": false, 00:04:15.388 "compare": false, 00:04:15.388 "compare_and_write": false, 00:04:15.388 "abort": true, 00:04:15.388 "seek_hole": false, 00:04:15.388 "seek_data": false, 00:04:15.388 "copy": true, 00:04:15.388 "nvme_iov_md": false 00:04:15.388 }, 00:04:15.388 "memory_domains": [ 00:04:15.388 { 00:04:15.388 "dma_device_id": "system", 00:04:15.388 "dma_device_type": 1 00:04:15.388 }, 00:04:15.388 { 00:04:15.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:15.388 "dma_device_type": 2 00:04:15.388 } 00:04:15.388 ], 00:04:15.388 "driver_specific": { 00:04:15.388 "passthru": { 00:04:15.388 "name": "Passthru0", 00:04:15.388 "base_bdev_name": "Malloc0" 00:04:15.388 } 00:04:15.388 } 00:04:15.388 } 00:04:15.388 ]' 00:04:15.388 13:14:04 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:15.388 13:14:04 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:15.388 13:14:04 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:15.388 13:14:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.388 13:14:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.388 13:14:04 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.388 13:14:04 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:15.388 13:14:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.388 13:14:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.647 13:14:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.647 13:14:04 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:15.647 13:14:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.647 13:14:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.647 13:14:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.647 13:14:04 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:15.647 13:14:04 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:15.647 13:14:04 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:15.647 00:04:15.647 real 0m0.321s 00:04:15.647 user 0m0.171s 00:04:15.647 sys 0m0.048s 00:04:15.647 13:14:04 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:15.647 13:14:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.647 ************************************ 00:04:15.647 END TEST rpc_integrity 00:04:15.647 ************************************ 00:04:15.647 13:14:04 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:15.647 13:14:04 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.647 13:14:04 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.647 13:14:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.647 ************************************ 00:04:15.647 START TEST rpc_plugins 00:04:15.647 ************************************ 00:04:15.647 13:14:04 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:15.647 13:14:04 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:04:15.648 13:14:04 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.648 13:14:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:15.648 13:14:04 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.648 13:14:04 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:15.648 13:14:04 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:15.648 13:14:04 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.648 13:14:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:15.648 13:14:04 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.648 13:14:04 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:15.648 { 00:04:15.648 "name": "Malloc1", 00:04:15.648 "aliases": [ 00:04:15.648 "e6cf3117-bf23-421f-9d25-5b7db154f4df" 00:04:15.648 ], 00:04:15.648 "product_name": "Malloc disk", 00:04:15.648 "block_size": 4096, 00:04:15.648 "num_blocks": 256, 00:04:15.648 "uuid": "e6cf3117-bf23-421f-9d25-5b7db154f4df", 00:04:15.648 "assigned_rate_limits": { 00:04:15.648 "rw_ios_per_sec": 0, 00:04:15.648 "rw_mbytes_per_sec": 0, 00:04:15.648 "r_mbytes_per_sec": 0, 00:04:15.648 "w_mbytes_per_sec": 0 00:04:15.648 }, 00:04:15.648 "claimed": false, 00:04:15.648 "zoned": false, 00:04:15.648 "supported_io_types": { 00:04:15.648 "read": true, 00:04:15.648 "write": true, 00:04:15.648 "unmap": true, 00:04:15.648 "flush": true, 00:04:15.648 "reset": true, 00:04:15.648 "nvme_admin": false, 00:04:15.648 "nvme_io": false, 00:04:15.648 "nvme_io_md": false, 00:04:15.648 "write_zeroes": true, 00:04:15.648 "zcopy": true, 00:04:15.648 "get_zone_info": false, 00:04:15.648 "zone_management": false, 00:04:15.648 "zone_append": false, 00:04:15.648 "compare": false, 00:04:15.648 "compare_and_write": false, 00:04:15.648 "abort": true, 00:04:15.648 "seek_hole": false, 00:04:15.648 "seek_data": false, 00:04:15.648 "copy": 
true, 00:04:15.648 "nvme_iov_md": false 00:04:15.648 }, 00:04:15.648 "memory_domains": [ 00:04:15.648 { 00:04:15.648 "dma_device_id": "system", 00:04:15.648 "dma_device_type": 1 00:04:15.648 }, 00:04:15.648 { 00:04:15.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:15.648 "dma_device_type": 2 00:04:15.648 } 00:04:15.648 ], 00:04:15.648 "driver_specific": {} 00:04:15.648 } 00:04:15.648 ]' 00:04:15.648 13:14:04 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:15.648 13:14:04 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:15.648 13:14:04 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:15.648 13:14:04 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.648 13:14:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:15.648 13:14:04 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.648 13:14:04 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:15.648 13:14:04 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.648 13:14:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:15.648 13:14:04 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.648 13:14:04 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:15.648 13:14:04 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:15.907 ************************************ 00:04:15.907 END TEST rpc_plugins 00:04:15.908 ************************************ 00:04:15.908 13:14:04 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:15.908 00:04:15.908 real 0m0.159s 00:04:15.908 user 0m0.087s 00:04:15.908 sys 0m0.028s 00:04:15.908 13:14:04 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:15.908 13:14:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:15.908 13:14:04 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:15.908 13:14:04 rpc -- 
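The rpc_integrity and rpc_plugins tests above repeatedly pipe `bdev_get_bdevs` output through `jq length` (2 after the passthru is stacked on Malloc0, then 0 after both are deleted). The same length check in Python, against a trimmed-down stand-in for the RPC response (the fields below are a small subset of the real JSON shown in the log):

```python
import json

# Trimmed stand-in for bdev_get_bdevs output after bdev_passthru_create.
response = json.loads("""
[
  {"name": "Malloc0", "claimed": true, "claim_type": "exclusive_write"},
  {"name": "Passthru0",
   "driver_specific": {"passthru": {"base_bdev_name": "Malloc0"}}}
]
""")

assert len(response) == 2          # mirrors: '[' "$(jq length)" == 2 ']'
assert response[0]["claimed"]      # base bdev is claimed by the passthru
assert response[1]["driver_specific"]["passthru"]["base_bdev_name"] == "Malloc0"
```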
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.908 13:14:04 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.908 13:14:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.908 ************************************ 00:04:15.908 START TEST rpc_trace_cmd_test 00:04:15.908 ************************************ 00:04:15.908 13:14:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:15.908 13:14:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:15.908 13:14:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:15.908 13:14:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.908 13:14:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:15.908 13:14:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.908 13:14:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:15.908 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56917", 00:04:15.908 "tpoint_group_mask": "0x8", 00:04:15.908 "iscsi_conn": { 00:04:15.908 "mask": "0x2", 00:04:15.908 "tpoint_mask": "0x0" 00:04:15.908 }, 00:04:15.908 "scsi": { 00:04:15.908 "mask": "0x4", 00:04:15.908 "tpoint_mask": "0x0" 00:04:15.908 }, 00:04:15.908 "bdev": { 00:04:15.908 "mask": "0x8", 00:04:15.908 "tpoint_mask": "0xffffffffffffffff" 00:04:15.908 }, 00:04:15.908 "nvmf_rdma": { 00:04:15.908 "mask": "0x10", 00:04:15.908 "tpoint_mask": "0x0" 00:04:15.908 }, 00:04:15.908 "nvmf_tcp": { 00:04:15.908 "mask": "0x20", 00:04:15.908 "tpoint_mask": "0x0" 00:04:15.908 }, 00:04:15.908 "ftl": { 00:04:15.908 "mask": "0x40", 00:04:15.908 "tpoint_mask": "0x0" 00:04:15.908 }, 00:04:15.908 "blobfs": { 00:04:15.908 "mask": "0x80", 00:04:15.908 "tpoint_mask": "0x0" 00:04:15.908 }, 00:04:15.908 "dsa": { 00:04:15.908 "mask": "0x200", 00:04:15.908 "tpoint_mask": "0x0" 00:04:15.908 }, 00:04:15.908 "thread": { 00:04:15.908 "mask": "0x400", 00:04:15.908 
"tpoint_mask": "0x0" 00:04:15.908 }, 00:04:15.908 "nvme_pcie": { 00:04:15.908 "mask": "0x800", 00:04:15.908 "tpoint_mask": "0x0" 00:04:15.908 }, 00:04:15.908 "iaa": { 00:04:15.908 "mask": "0x1000", 00:04:15.908 "tpoint_mask": "0x0" 00:04:15.908 }, 00:04:15.908 "nvme_tcp": { 00:04:15.908 "mask": "0x2000", 00:04:15.908 "tpoint_mask": "0x0" 00:04:15.908 }, 00:04:15.908 "bdev_nvme": { 00:04:15.908 "mask": "0x4000", 00:04:15.908 "tpoint_mask": "0x0" 00:04:15.908 }, 00:04:15.908 "sock": { 00:04:15.908 "mask": "0x8000", 00:04:15.908 "tpoint_mask": "0x0" 00:04:15.908 }, 00:04:15.908 "blob": { 00:04:15.908 "mask": "0x10000", 00:04:15.908 "tpoint_mask": "0x0" 00:04:15.908 }, 00:04:15.908 "bdev_raid": { 00:04:15.908 "mask": "0x20000", 00:04:15.908 "tpoint_mask": "0x0" 00:04:15.908 }, 00:04:15.908 "scheduler": { 00:04:15.908 "mask": "0x40000", 00:04:15.908 "tpoint_mask": "0x0" 00:04:15.908 } 00:04:15.908 }' 00:04:15.908 13:14:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:15.908 13:14:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:15.908 13:14:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:15.908 13:14:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:15.908 13:14:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:15.908 13:14:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:15.908 13:14:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:16.168 13:14:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:16.168 13:14:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:16.168 ************************************ 00:04:16.168 END TEST rpc_trace_cmd_test 00:04:16.168 ************************************ 00:04:16.168 13:14:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:16.168 00:04:16.168 real 0m0.252s 00:04:16.168 user 
0m0.201s 00:04:16.168 sys 0m0.038s 00:04:16.168 13:14:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:16.168 13:14:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:16.168 13:14:05 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:16.168 13:14:05 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:16.168 13:14:05 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:16.168 13:14:05 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:16.168 13:14:05 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:16.168 13:14:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.168 ************************************ 00:04:16.168 START TEST rpc_daemon_integrity 00:04:16.168 ************************************ 00:04:16.168 13:14:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:16.169 13:14:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:16.169 13:14:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.169 13:14:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.169 13:14:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.169 13:14:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:16.169 13:14:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:16.169 13:14:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:16.169 13:14:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:16.169 13:14:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.169 13:14:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.169 13:14:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.169 13:14:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # 
malloc=Malloc2 00:04:16.169 13:14:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:16.169 13:14:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.169 13:14:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.169 13:14:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.169 13:14:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:16.169 { 00:04:16.169 "name": "Malloc2", 00:04:16.169 "aliases": [ 00:04:16.169 "23fd7eef-2757-4766-b7b6-b764c1e51643" 00:04:16.169 ], 00:04:16.169 "product_name": "Malloc disk", 00:04:16.169 "block_size": 512, 00:04:16.169 "num_blocks": 16384, 00:04:16.169 "uuid": "23fd7eef-2757-4766-b7b6-b764c1e51643", 00:04:16.169 "assigned_rate_limits": { 00:04:16.169 "rw_ios_per_sec": 0, 00:04:16.169 "rw_mbytes_per_sec": 0, 00:04:16.169 "r_mbytes_per_sec": 0, 00:04:16.169 "w_mbytes_per_sec": 0 00:04:16.169 }, 00:04:16.169 "claimed": false, 00:04:16.169 "zoned": false, 00:04:16.169 "supported_io_types": { 00:04:16.169 "read": true, 00:04:16.169 "write": true, 00:04:16.169 "unmap": true, 00:04:16.169 "flush": true, 00:04:16.169 "reset": true, 00:04:16.169 "nvme_admin": false, 00:04:16.169 "nvme_io": false, 00:04:16.169 "nvme_io_md": false, 00:04:16.169 "write_zeroes": true, 00:04:16.169 "zcopy": true, 00:04:16.169 "get_zone_info": false, 00:04:16.169 "zone_management": false, 00:04:16.169 "zone_append": false, 00:04:16.169 "compare": false, 00:04:16.169 "compare_and_write": false, 00:04:16.169 "abort": true, 00:04:16.169 "seek_hole": false, 00:04:16.169 "seek_data": false, 00:04:16.169 "copy": true, 00:04:16.169 "nvme_iov_md": false 00:04:16.169 }, 00:04:16.169 "memory_domains": [ 00:04:16.169 { 00:04:16.169 "dma_device_id": "system", 00:04:16.169 "dma_device_type": 1 00:04:16.169 }, 00:04:16.169 { 00:04:16.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:16.169 "dma_device_type": 2 00:04:16.169 } 
00:04:16.169 ], 00:04:16.169 "driver_specific": {} 00:04:16.169 } 00:04:16.169 ]' 00:04:16.169 13:14:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:16.429 13:14:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:16.429 13:14:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:16.429 13:14:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.429 13:14:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.429 [2024-11-17 13:14:05.430430] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:16.429 [2024-11-17 13:14:05.430536] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:16.429 [2024-11-17 13:14:05.430561] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:04:16.429 [2024-11-17 13:14:05.430572] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:16.429 [2024-11-17 13:14:05.432868] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:16.429 [2024-11-17 13:14:05.432913] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:16.429 Passthru0 00:04:16.429 13:14:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.429 13:14:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:16.429 13:14:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.429 13:14:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.429 13:14:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.429 13:14:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:16.429 { 00:04:16.429 "name": "Malloc2", 00:04:16.429 "aliases": [ 00:04:16.429 "23fd7eef-2757-4766-b7b6-b764c1e51643" 
00:04:16.429 ], 00:04:16.429 "product_name": "Malloc disk", 00:04:16.429 "block_size": 512, 00:04:16.429 "num_blocks": 16384, 00:04:16.429 "uuid": "23fd7eef-2757-4766-b7b6-b764c1e51643", 00:04:16.429 "assigned_rate_limits": { 00:04:16.429 "rw_ios_per_sec": 0, 00:04:16.429 "rw_mbytes_per_sec": 0, 00:04:16.429 "r_mbytes_per_sec": 0, 00:04:16.429 "w_mbytes_per_sec": 0 00:04:16.429 }, 00:04:16.429 "claimed": true, 00:04:16.429 "claim_type": "exclusive_write", 00:04:16.429 "zoned": false, 00:04:16.429 "supported_io_types": { 00:04:16.429 "read": true, 00:04:16.429 "write": true, 00:04:16.429 "unmap": true, 00:04:16.429 "flush": true, 00:04:16.429 "reset": true, 00:04:16.429 "nvme_admin": false, 00:04:16.429 "nvme_io": false, 00:04:16.429 "nvme_io_md": false, 00:04:16.429 "write_zeroes": true, 00:04:16.429 "zcopy": true, 00:04:16.429 "get_zone_info": false, 00:04:16.429 "zone_management": false, 00:04:16.429 "zone_append": false, 00:04:16.429 "compare": false, 00:04:16.429 "compare_and_write": false, 00:04:16.429 "abort": true, 00:04:16.429 "seek_hole": false, 00:04:16.429 "seek_data": false, 00:04:16.429 "copy": true, 00:04:16.429 "nvme_iov_md": false 00:04:16.429 }, 00:04:16.429 "memory_domains": [ 00:04:16.429 { 00:04:16.429 "dma_device_id": "system", 00:04:16.429 "dma_device_type": 1 00:04:16.429 }, 00:04:16.429 { 00:04:16.429 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:16.429 "dma_device_type": 2 00:04:16.429 } 00:04:16.429 ], 00:04:16.429 "driver_specific": {} 00:04:16.429 }, 00:04:16.429 { 00:04:16.429 "name": "Passthru0", 00:04:16.429 "aliases": [ 00:04:16.429 "3046ba4b-e9b4-558e-ba16-dcd4028916a9" 00:04:16.429 ], 00:04:16.429 "product_name": "passthru", 00:04:16.429 "block_size": 512, 00:04:16.429 "num_blocks": 16384, 00:04:16.429 "uuid": "3046ba4b-e9b4-558e-ba16-dcd4028916a9", 00:04:16.429 "assigned_rate_limits": { 00:04:16.429 "rw_ios_per_sec": 0, 00:04:16.429 "rw_mbytes_per_sec": 0, 00:04:16.429 "r_mbytes_per_sec": 0, 00:04:16.429 "w_mbytes_per_sec": 0 
00:04:16.429 }, 00:04:16.429 "claimed": false, 00:04:16.429 "zoned": false, 00:04:16.429 "supported_io_types": { 00:04:16.429 "read": true, 00:04:16.429 "write": true, 00:04:16.429 "unmap": true, 00:04:16.429 "flush": true, 00:04:16.429 "reset": true, 00:04:16.429 "nvme_admin": false, 00:04:16.429 "nvme_io": false, 00:04:16.429 "nvme_io_md": false, 00:04:16.429 "write_zeroes": true, 00:04:16.429 "zcopy": true, 00:04:16.429 "get_zone_info": false, 00:04:16.429 "zone_management": false, 00:04:16.429 "zone_append": false, 00:04:16.429 "compare": false, 00:04:16.429 "compare_and_write": false, 00:04:16.429 "abort": true, 00:04:16.429 "seek_hole": false, 00:04:16.429 "seek_data": false, 00:04:16.429 "copy": true, 00:04:16.429 "nvme_iov_md": false 00:04:16.429 }, 00:04:16.429 "memory_domains": [ 00:04:16.429 { 00:04:16.429 "dma_device_id": "system", 00:04:16.429 "dma_device_type": 1 00:04:16.429 }, 00:04:16.429 { 00:04:16.429 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:16.429 "dma_device_type": 2 00:04:16.429 } 00:04:16.429 ], 00:04:16.429 "driver_specific": { 00:04:16.429 "passthru": { 00:04:16.429 "name": "Passthru0", 00:04:16.429 "base_bdev_name": "Malloc2" 00:04:16.429 } 00:04:16.429 } 00:04:16.429 } 00:04:16.429 ]' 00:04:16.429 13:14:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:16.429 13:14:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:16.429 13:14:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:16.429 13:14:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.429 13:14:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.429 13:14:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.429 13:14:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:16.429 13:14:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:04:16.429 13:14:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.429 13:14:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.429 13:14:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:16.429 13:14:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.429 13:14:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.430 13:14:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.430 13:14:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:16.430 13:14:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:16.430 ************************************ 00:04:16.430 END TEST rpc_daemon_integrity 00:04:16.430 ************************************ 00:04:16.430 13:14:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:16.430 00:04:16.430 real 0m0.365s 00:04:16.430 user 0m0.203s 00:04:16.430 sys 0m0.054s 00:04:16.430 13:14:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:16.430 13:14:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.714 13:14:05 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:16.714 13:14:05 rpc -- rpc/rpc.sh@84 -- # killprocess 56917 00:04:16.714 13:14:05 rpc -- common/autotest_common.sh@954 -- # '[' -z 56917 ']' 00:04:16.714 13:14:05 rpc -- common/autotest_common.sh@958 -- # kill -0 56917 00:04:16.714 13:14:05 rpc -- common/autotest_common.sh@959 -- # uname 00:04:16.714 13:14:05 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:16.714 13:14:05 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56917 00:04:16.714 killing process with pid 56917 00:04:16.714 13:14:05 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:16.714 13:14:05 rpc -- common/autotest_common.sh@964 -- 
# '[' reactor_0 = sudo ']' 00:04:16.714 13:14:05 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56917' 00:04:16.714 13:14:05 rpc -- common/autotest_common.sh@973 -- # kill 56917 00:04:16.714 13:14:05 rpc -- common/autotest_common.sh@978 -- # wait 56917 00:04:19.253 00:04:19.253 real 0m5.102s 00:04:19.253 user 0m5.587s 00:04:19.253 sys 0m0.893s 00:04:19.253 13:14:07 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:19.253 13:14:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:19.253 ************************************ 00:04:19.253 END TEST rpc 00:04:19.253 ************************************ 00:04:19.253 13:14:08 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:19.253 13:14:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:19.253 13:14:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:19.253 13:14:08 -- common/autotest_common.sh@10 -- # set +x 00:04:19.253 ************************************ 00:04:19.253 START TEST skip_rpc 00:04:19.253 ************************************ 00:04:19.253 13:14:08 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:19.253 * Looking for test storage... 
00:04:19.253 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:19.253 13:14:08 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:19.253 13:14:08 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:19.253 13:14:08 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:19.253 13:14:08 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:19.253 13:14:08 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:19.253 13:14:08 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:19.253 13:14:08 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:19.253 13:14:08 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:19.253 13:14:08 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:19.253 13:14:08 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:19.253 13:14:08 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:19.253 13:14:08 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:19.253 13:14:08 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:19.253 13:14:08 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:19.253 13:14:08 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:19.253 13:14:08 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:19.253 13:14:08 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:19.253 13:14:08 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:19.253 13:14:08 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:19.253 13:14:08 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:19.253 13:14:08 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:19.253 13:14:08 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:19.253 13:14:08 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:19.253 13:14:08 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:19.253 13:14:08 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:19.253 13:14:08 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:19.253 13:14:08 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:19.253 13:14:08 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:19.253 13:14:08 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:19.253 13:14:08 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:19.253 13:14:08 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:19.253 13:14:08 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:19.253 13:14:08 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:19.253 13:14:08 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:19.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.253 --rc genhtml_branch_coverage=1 00:04:19.253 --rc genhtml_function_coverage=1 00:04:19.253 --rc genhtml_legend=1 00:04:19.253 --rc geninfo_all_blocks=1 00:04:19.253 --rc geninfo_unexecuted_blocks=1 00:04:19.253 00:04:19.253 ' 00:04:19.253 13:14:08 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:19.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.253 --rc genhtml_branch_coverage=1 00:04:19.253 --rc genhtml_function_coverage=1 00:04:19.253 --rc genhtml_legend=1 00:04:19.253 --rc geninfo_all_blocks=1 00:04:19.253 --rc geninfo_unexecuted_blocks=1 00:04:19.253 00:04:19.253 ' 00:04:19.253 13:14:08 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:04:19.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.253 --rc genhtml_branch_coverage=1 00:04:19.253 --rc genhtml_function_coverage=1 00:04:19.253 --rc genhtml_legend=1 00:04:19.253 --rc geninfo_all_blocks=1 00:04:19.253 --rc geninfo_unexecuted_blocks=1 00:04:19.253 00:04:19.253 ' 00:04:19.253 13:14:08 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:19.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.253 --rc genhtml_branch_coverage=1 00:04:19.253 --rc genhtml_function_coverage=1 00:04:19.253 --rc genhtml_legend=1 00:04:19.253 --rc geninfo_all_blocks=1 00:04:19.253 --rc geninfo_unexecuted_blocks=1 00:04:19.253 00:04:19.253 ' 00:04:19.253 13:14:08 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:19.253 13:14:08 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:19.253 13:14:08 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:19.253 13:14:08 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:19.253 13:14:08 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:19.253 13:14:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:19.253 ************************************ 00:04:19.253 START TEST skip_rpc 00:04:19.254 ************************************ 00:04:19.254 13:14:08 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:19.254 13:14:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57146 00:04:19.254 13:14:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:19.254 13:14:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:19.254 13:14:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:19.254 [2024-11-17 13:14:08.378377] Starting SPDK v25.01-pre 
git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:04:19.254 [2024-11-17 13:14:08.378521] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57146 ] 00:04:19.514 [2024-11-17 13:14:08.551091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:19.514 [2024-11-17 13:14:08.666490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.793 13:14:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:24.793 13:14:13 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:24.793 13:14:13 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:24.793 13:14:13 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:24.793 13:14:13 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:24.793 13:14:13 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:24.793 13:14:13 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:24.793 13:14:13 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:24.793 13:14:13 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:24.793 13:14:13 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.793 13:14:13 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:24.793 13:14:13 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:24.793 13:14:13 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:24.793 13:14:13 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:24.793 13:14:13 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:04:24.793 13:14:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:24.793 13:14:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57146 00:04:24.793 13:14:13 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57146 ']' 00:04:24.793 13:14:13 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57146 00:04:24.793 13:14:13 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:24.793 13:14:13 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:24.793 13:14:13 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57146 00:04:24.793 13:14:13 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:24.793 killing process with pid 57146 00:04:24.793 13:14:13 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:24.793 13:14:13 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57146' 00:04:24.793 13:14:13 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57146 00:04:24.793 13:14:13 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57146 00:04:26.728 ************************************ 00:04:26.728 END TEST skip_rpc 00:04:26.728 ************************************ 00:04:26.728 00:04:26.728 real 0m7.378s 00:04:26.728 user 0m6.886s 00:04:26.728 sys 0m0.398s 00:04:26.728 13:14:15 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:26.728 13:14:15 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.728 13:14:15 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:26.728 13:14:15 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:26.728 13:14:15 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:26.728 13:14:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.728 
************************************ 00:04:26.728 START TEST skip_rpc_with_json 00:04:26.728 ************************************ 00:04:26.728 13:14:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:26.728 13:14:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:26.728 13:14:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57256 00:04:26.728 13:14:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:26.728 13:14:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:26.728 13:14:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57256 00:04:26.728 13:14:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57256 ']' 00:04:26.728 13:14:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:26.728 13:14:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:26.729 13:14:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:26.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:26.729 13:14:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:26.729 13:14:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:26.729 [2024-11-17 13:14:15.875268] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:04:26.729 [2024-11-17 13:14:15.875481] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57256 ] 00:04:26.988 [2024-11-17 13:14:16.046734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:26.988 [2024-11-17 13:14:16.157444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.928 13:14:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:27.928 13:14:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:27.928 13:14:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:27.928 13:14:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.928 13:14:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:27.928 [2024-11-17 13:14:16.984811] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:27.928 request: 00:04:27.928 { 00:04:27.928 "trtype": "tcp", 00:04:27.928 "method": "nvmf_get_transports", 00:04:27.928 "req_id": 1 00:04:27.928 } 00:04:27.928 Got JSON-RPC error response 00:04:27.928 response: 00:04:27.928 { 00:04:27.928 "code": -19, 00:04:27.928 "message": "No such device" 00:04:27.928 } 00:04:27.928 13:14:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:27.928 13:14:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:27.928 13:14:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.928 13:14:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:27.928 [2024-11-17 13:14:17.000885] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:04:27.928 13:14:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:27.928 13:14:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:27.928 13:14:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.928 13:14:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:28.188 13:14:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.188 13:14:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:28.188 { 00:04:28.188 "subsystems": [ 00:04:28.188 { 00:04:28.188 "subsystem": "fsdev", 00:04:28.188 "config": [ 00:04:28.188 { 00:04:28.188 "method": "fsdev_set_opts", 00:04:28.188 "params": { 00:04:28.188 "fsdev_io_pool_size": 65535, 00:04:28.188 "fsdev_io_cache_size": 256 00:04:28.188 } 00:04:28.188 } 00:04:28.188 ] 00:04:28.188 }, 00:04:28.188 { 00:04:28.188 "subsystem": "keyring", 00:04:28.188 "config": [] 00:04:28.188 }, 00:04:28.188 { 00:04:28.188 "subsystem": "iobuf", 00:04:28.188 "config": [ 00:04:28.188 { 00:04:28.188 "method": "iobuf_set_options", 00:04:28.188 "params": { 00:04:28.188 "small_pool_count": 8192, 00:04:28.188 "large_pool_count": 1024, 00:04:28.188 "small_bufsize": 8192, 00:04:28.188 "large_bufsize": 135168, 00:04:28.188 "enable_numa": false 00:04:28.188 } 00:04:28.188 } 00:04:28.188 ] 00:04:28.188 }, 00:04:28.188 { 00:04:28.188 "subsystem": "sock", 00:04:28.188 "config": [ 00:04:28.188 { 00:04:28.188 "method": "sock_set_default_impl", 00:04:28.188 "params": { 00:04:28.188 "impl_name": "posix" 00:04:28.188 } 00:04:28.188 }, 00:04:28.188 { 00:04:28.188 "method": "sock_impl_set_options", 00:04:28.188 "params": { 00:04:28.188 "impl_name": "ssl", 00:04:28.188 "recv_buf_size": 4096, 00:04:28.188 "send_buf_size": 4096, 00:04:28.188 "enable_recv_pipe": true, 00:04:28.188 "enable_quickack": false, 00:04:28.188 
"enable_placement_id": 0, 00:04:28.188 "enable_zerocopy_send_server": true, 00:04:28.188 "enable_zerocopy_send_client": false, 00:04:28.188 "zerocopy_threshold": 0, 00:04:28.188 "tls_version": 0, 00:04:28.188 "enable_ktls": false 00:04:28.188 } 00:04:28.188 }, 00:04:28.188 { 00:04:28.188 "method": "sock_impl_set_options", 00:04:28.188 "params": { 00:04:28.188 "impl_name": "posix", 00:04:28.188 "recv_buf_size": 2097152, 00:04:28.188 "send_buf_size": 2097152, 00:04:28.188 "enable_recv_pipe": true, 00:04:28.188 "enable_quickack": false, 00:04:28.188 "enable_placement_id": 0, 00:04:28.188 "enable_zerocopy_send_server": true, 00:04:28.188 "enable_zerocopy_send_client": false, 00:04:28.188 "zerocopy_threshold": 0, 00:04:28.188 "tls_version": 0, 00:04:28.188 "enable_ktls": false 00:04:28.188 } 00:04:28.188 } 00:04:28.188 ] 00:04:28.188 }, 00:04:28.188 { 00:04:28.188 "subsystem": "vmd", 00:04:28.188 "config": [] 00:04:28.188 }, 00:04:28.188 { 00:04:28.188 "subsystem": "accel", 00:04:28.188 "config": [ 00:04:28.188 { 00:04:28.188 "method": "accel_set_options", 00:04:28.188 "params": { 00:04:28.188 "small_cache_size": 128, 00:04:28.188 "large_cache_size": 16, 00:04:28.188 "task_count": 2048, 00:04:28.188 "sequence_count": 2048, 00:04:28.188 "buf_count": 2048 00:04:28.188 } 00:04:28.188 } 00:04:28.188 ] 00:04:28.188 }, 00:04:28.188 { 00:04:28.188 "subsystem": "bdev", 00:04:28.188 "config": [ 00:04:28.188 { 00:04:28.188 "method": "bdev_set_options", 00:04:28.188 "params": { 00:04:28.188 "bdev_io_pool_size": 65535, 00:04:28.188 "bdev_io_cache_size": 256, 00:04:28.188 "bdev_auto_examine": true, 00:04:28.188 "iobuf_small_cache_size": 128, 00:04:28.188 "iobuf_large_cache_size": 16 00:04:28.188 } 00:04:28.188 }, 00:04:28.188 { 00:04:28.188 "method": "bdev_raid_set_options", 00:04:28.188 "params": { 00:04:28.188 "process_window_size_kb": 1024, 00:04:28.188 "process_max_bandwidth_mb_sec": 0 00:04:28.188 } 00:04:28.188 }, 00:04:28.188 { 00:04:28.188 "method": "bdev_iscsi_set_options", 
00:04:28.188 "params": {
00:04:28.188 "timeout_sec": 30
00:04:28.188 }
00:04:28.188 },
00:04:28.188 {
00:04:28.188 "method": "bdev_nvme_set_options",
00:04:28.188 "params": {
00:04:28.188 "action_on_timeout": "none",
00:04:28.188 "timeout_us": 0,
00:04:28.188 "timeout_admin_us": 0,
00:04:28.188 "keep_alive_timeout_ms": 10000,
00:04:28.188 "arbitration_burst": 0,
00:04:28.188 "low_priority_weight": 0,
00:04:28.188 "medium_priority_weight": 0,
00:04:28.188 "high_priority_weight": 0,
00:04:28.188 "nvme_adminq_poll_period_us": 10000,
00:04:28.188 "nvme_ioq_poll_period_us": 0,
00:04:28.188 "io_queue_requests": 0,
00:04:28.188 "delay_cmd_submit": true,
00:04:28.188 "transport_retry_count": 4,
00:04:28.188 "bdev_retry_count": 3,
00:04:28.188 "transport_ack_timeout": 0,
00:04:28.188 "ctrlr_loss_timeout_sec": 0,
00:04:28.188 "reconnect_delay_sec": 0,
00:04:28.188 "fast_io_fail_timeout_sec": 0,
00:04:28.188 "disable_auto_failback": false,
00:04:28.188 "generate_uuids": false,
00:04:28.188 "transport_tos": 0,
00:04:28.188 "nvme_error_stat": false,
00:04:28.188 "rdma_srq_size": 0,
00:04:28.188 "io_path_stat": false,
00:04:28.188 "allow_accel_sequence": false,
00:04:28.188 "rdma_max_cq_size": 0,
00:04:28.188 "rdma_cm_event_timeout_ms": 0,
00:04:28.188 "dhchap_digests": [
00:04:28.188 "sha256",
00:04:28.188 "sha384",
00:04:28.188 "sha512"
00:04:28.188 ],
00:04:28.188 "dhchap_dhgroups": [
00:04:28.188 "null",
00:04:28.188 "ffdhe2048",
00:04:28.188 "ffdhe3072",
00:04:28.188 "ffdhe4096",
00:04:28.188 "ffdhe6144",
00:04:28.188 "ffdhe8192"
00:04:28.188 ]
00:04:28.188 }
00:04:28.188 },
00:04:28.188 {
00:04:28.188 "method": "bdev_nvme_set_hotplug",
00:04:28.188 "params": {
00:04:28.188 "period_us": 100000,
00:04:28.188 "enable": false
00:04:28.188 }
00:04:28.188 },
00:04:28.188 {
00:04:28.188 "method": "bdev_wait_for_examine"
00:04:28.188 }
00:04:28.188 ]
00:04:28.188 },
00:04:28.188 {
00:04:28.188 "subsystem": "scsi",
00:04:28.188 "config": null
00:04:28.188 },
00:04:28.188 {
00:04:28.188 "subsystem": "scheduler",
00:04:28.188 "config": [
00:04:28.188 {
00:04:28.188 "method": "framework_set_scheduler",
00:04:28.188 "params": {
00:04:28.188 "name": "static"
00:04:28.188 }
00:04:28.188 }
00:04:28.188 ]
00:04:28.188 },
00:04:28.188 {
00:04:28.188 "subsystem": "vhost_scsi",
00:04:28.188 "config": []
00:04:28.188 },
00:04:28.188 {
00:04:28.188 "subsystem": "vhost_blk",
00:04:28.188 "config": []
00:04:28.188 },
00:04:28.188 {
00:04:28.188 "subsystem": "ublk",
00:04:28.188 "config": []
00:04:28.188 },
00:04:28.188 {
00:04:28.188 "subsystem": "nbd",
00:04:28.188 "config": []
00:04:28.188 },
00:04:28.188 {
00:04:28.188 "subsystem": "nvmf",
00:04:28.188 "config": [
00:04:28.188 {
00:04:28.188 "method": "nvmf_set_config",
00:04:28.188 "params": {
00:04:28.188 "discovery_filter": "match_any",
00:04:28.188 "admin_cmd_passthru": {
00:04:28.188 "identify_ctrlr": false
00:04:28.188 },
00:04:28.188 "dhchap_digests": [
00:04:28.188 "sha256",
00:04:28.188 "sha384",
00:04:28.188 "sha512"
00:04:28.188 ],
00:04:28.188 "dhchap_dhgroups": [
00:04:28.188 "null",
00:04:28.188 "ffdhe2048",
00:04:28.188 "ffdhe3072",
00:04:28.188 "ffdhe4096",
00:04:28.188 "ffdhe6144",
00:04:28.188 "ffdhe8192"
00:04:28.188 ]
00:04:28.188 }
00:04:28.188 },
00:04:28.188 {
00:04:28.188 "method": "nvmf_set_max_subsystems",
00:04:28.188 "params": {
00:04:28.188 "max_subsystems": 1024
00:04:28.188 }
00:04:28.188 },
00:04:28.188 {
00:04:28.188 "method": "nvmf_set_crdt",
00:04:28.188 "params": {
00:04:28.188 "crdt1": 0,
00:04:28.188 "crdt2": 0,
00:04:28.188 "crdt3": 0
00:04:28.188 }
00:04:28.188 },
00:04:28.188 {
00:04:28.188 "method": "nvmf_create_transport",
00:04:28.188 "params": {
00:04:28.188 "trtype": "TCP",
00:04:28.188 "max_queue_depth": 128,
00:04:28.189 "max_io_qpairs_per_ctrlr": 127,
00:04:28.189 "in_capsule_data_size": 4096,
00:04:28.189 "max_io_size": 131072,
00:04:28.189 "io_unit_size": 131072,
00:04:28.189 "max_aq_depth": 128,
00:04:28.189 "num_shared_buffers": 511,
00:04:28.189 "buf_cache_size": 4294967295,
00:04:28.189 "dif_insert_or_strip": false,
00:04:28.189 "zcopy": false,
00:04:28.189 "c2h_success": true,
00:04:28.189 "sock_priority": 0,
00:04:28.189 "abort_timeout_sec": 1,
00:04:28.189 "ack_timeout": 0,
00:04:28.189 "data_wr_pool_size": 0
00:04:28.189 }
00:04:28.189 }
00:04:28.189 ]
00:04:28.189 },
00:04:28.189 {
00:04:28.189 "subsystem": "iscsi",
00:04:28.189 "config": [
00:04:28.189 {
00:04:28.189 "method": "iscsi_set_options",
00:04:28.189 "params": {
00:04:28.189 "node_base": "iqn.2016-06.io.spdk",
00:04:28.189 "max_sessions": 128,
00:04:28.189 "max_connections_per_session": 2,
00:04:28.189 "max_queue_depth": 64,
00:04:28.189 "default_time2wait": 2,
00:04:28.189 "default_time2retain": 20,
00:04:28.189 "first_burst_length": 8192,
00:04:28.189 "immediate_data": true,
00:04:28.189 "allow_duplicated_isid": false,
00:04:28.189 "error_recovery_level": 0,
00:04:28.189 "nop_timeout": 60,
00:04:28.189 "nop_in_interval": 30,
00:04:28.189 "disable_chap": false,
00:04:28.189 "require_chap": false,
00:04:28.189 "mutual_chap": false,
00:04:28.189 "chap_group": 0,
00:04:28.189 "max_large_datain_per_connection": 64,
00:04:28.189 "max_r2t_per_connection": 4,
00:04:28.189 "pdu_pool_size": 36864,
00:04:28.189 "immediate_data_pool_size": 16384,
00:04:28.189 "data_out_pool_size": 2048
00:04:28.189 }
00:04:28.189 }
00:04:28.189 ]
00:04:28.189 }
00:04:28.189 ]
00:04:28.189 }
00:04:28.189 13:14:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:04:28.189 13:14:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57256
00:04:28.189 13:14:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57256 ']'
00:04:28.189 13:14:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57256
00:04:28.189 13:14:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname
00:04:28.189 13:14:17 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:28.189 13:14:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57256 00:04:28.189 killing process with pid 57256 00:04:28.189 13:14:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:28.189 13:14:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:28.189 13:14:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57256' 00:04:28.189 13:14:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57256 00:04:28.189 13:14:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57256 00:04:30.727 13:14:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57301 00:04:30.727 13:14:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:30.727 13:14:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:36.010 13:14:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57301 00:04:36.010 13:14:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57301 ']' 00:04:36.010 13:14:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57301 00:04:36.010 13:14:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:36.010 13:14:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:36.010 13:14:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57301 00:04:36.010 killing process with pid 57301 00:04:36.010 13:14:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:36.010 13:14:24 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:36.010 13:14:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57301' 00:04:36.010 13:14:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57301 00:04:36.010 13:14:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57301 00:04:37.919 13:14:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:37.919 13:14:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:37.919 ************************************ 00:04:37.919 END TEST skip_rpc_with_json 00:04:37.919 ************************************ 00:04:37.919 00:04:37.919 real 0m11.186s 00:04:37.919 user 0m10.632s 00:04:37.919 sys 0m0.859s 00:04:37.919 13:14:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.919 13:14:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:37.919 13:14:26 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:37.919 13:14:26 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.919 13:14:26 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.919 13:14:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.919 ************************************ 00:04:37.919 START TEST skip_rpc_with_delay 00:04:37.919 ************************************ 00:04:37.919 13:14:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:37.919 13:14:26 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:37.919 13:14:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:37.919 
13:14:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:37.919 13:14:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:37.919 13:14:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:37.919 13:14:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:37.919 13:14:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:37.919 13:14:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:37.919 13:14:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:37.919 13:14:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:37.919 13:14:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:37.919 13:14:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:37.919 [2024-11-17 13:14:27.091716] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:38.179 13:14:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:38.179 13:14:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:38.179 13:14:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:38.179 ************************************ 00:04:38.179 END TEST skip_rpc_with_delay 00:04:38.179 ************************************ 00:04:38.179 13:14:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:38.179 00:04:38.179 real 0m0.170s 00:04:38.179 user 0m0.090s 00:04:38.179 sys 0m0.078s 00:04:38.179 13:14:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.179 13:14:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:38.179 13:14:27 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:38.179 13:14:27 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:38.179 13:14:27 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:38.179 13:14:27 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:38.179 13:14:27 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.179 13:14:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.179 ************************************ 00:04:38.179 START TEST exit_on_failed_rpc_init 00:04:38.179 ************************************ 00:04:38.179 13:14:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:38.179 13:14:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57440 00:04:38.179 13:14:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:38.179 13:14:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57440 00:04:38.179 13:14:27 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57440 ']' 00:04:38.179 13:14:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:38.179 13:14:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:38.179 13:14:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:38.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:38.179 13:14:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:38.179 13:14:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:38.179 [2024-11-17 13:14:27.321542] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:04:38.179 [2024-11-17 13:14:27.321737] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57440 ] 00:04:38.439 [2024-11-17 13:14:27.496198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.439 [2024-11-17 13:14:27.617961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.377 13:14:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:39.377 13:14:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:39.377 13:14:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:39.377 13:14:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:39.377 13:14:28 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:39.377 13:14:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:39.377 13:14:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:39.377 13:14:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:39.377 13:14:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:39.377 13:14:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:39.377 13:14:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:39.377 13:14:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:39.377 13:14:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:39.377 13:14:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:39.377 13:14:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:39.377 [2024-11-17 13:14:28.585677] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:04:39.377 [2024-11-17 13:14:28.586206] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57458 ] 00:04:39.636 [2024-11-17 13:14:28.759620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.897 [2024-11-17 13:14:28.872501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:39.897 [2024-11-17 13:14:28.872852] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:39.897 [2024-11-17 13:14:28.872910] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:39.897 [2024-11-17 13:14:28.872962] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:40.157 13:14:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:40.157 13:14:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:40.157 13:14:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:40.157 13:14:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:40.157 13:14:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:40.157 13:14:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:40.157 13:14:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:40.157 13:14:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57440 00:04:40.157 13:14:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57440 ']' 00:04:40.157 13:14:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57440 00:04:40.157 13:14:29 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:40.157 13:14:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:40.157 13:14:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57440 00:04:40.157 13:14:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:40.157 13:14:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:40.157 13:14:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57440' 00:04:40.157 killing process with pid 57440 00:04:40.157 13:14:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57440 00:04:40.157 13:14:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57440 00:04:42.697 00:04:42.697 real 0m4.299s 00:04:42.697 user 0m4.626s 00:04:42.697 sys 0m0.565s 00:04:42.697 13:14:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.697 13:14:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:42.697 ************************************ 00:04:42.697 END TEST exit_on_failed_rpc_init 00:04:42.697 ************************************ 00:04:42.697 13:14:31 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:42.697 00:04:42.697 real 0m23.515s 00:04:42.697 user 0m22.437s 00:04:42.697 sys 0m2.196s 00:04:42.697 13:14:31 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.697 13:14:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.697 ************************************ 00:04:42.697 END TEST skip_rpc 00:04:42.697 ************************************ 00:04:42.697 13:14:31 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:42.697 13:14:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.697 13:14:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.697 13:14:31 -- common/autotest_common.sh@10 -- # set +x 00:04:42.697 ************************************ 00:04:42.697 START TEST rpc_client 00:04:42.697 ************************************ 00:04:42.697 13:14:31 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:42.697 * Looking for test storage... 00:04:42.697 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:42.697 13:14:31 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:42.697 13:14:31 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:42.697 13:14:31 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:42.697 13:14:31 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:42.697 13:14:31 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:42.697 13:14:31 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:42.697 13:14:31 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:42.697 13:14:31 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:42.697 13:14:31 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:42.697 13:14:31 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:42.697 13:14:31 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:42.697 13:14:31 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:42.697 13:14:31 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:42.697 13:14:31 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:42.697 13:14:31 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:42.697 13:14:31 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:42.697 13:14:31 rpc_client -- scripts/common.sh@345 
-- # : 1 00:04:42.698 13:14:31 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:42.698 13:14:31 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:42.698 13:14:31 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:42.698 13:14:31 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:42.698 13:14:31 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:42.698 13:14:31 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:42.698 13:14:31 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:42.698 13:14:31 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:42.698 13:14:31 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:42.698 13:14:31 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:42.698 13:14:31 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:42.698 13:14:31 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:42.698 13:14:31 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:42.698 13:14:31 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:42.698 13:14:31 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:42.698 13:14:31 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:42.698 13:14:31 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:42.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.698 --rc genhtml_branch_coverage=1 00:04:42.698 --rc genhtml_function_coverage=1 00:04:42.698 --rc genhtml_legend=1 00:04:42.698 --rc geninfo_all_blocks=1 00:04:42.698 --rc geninfo_unexecuted_blocks=1 00:04:42.698 00:04:42.698 ' 00:04:42.698 13:14:31 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:42.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.698 --rc genhtml_branch_coverage=1 00:04:42.698 --rc genhtml_function_coverage=1 00:04:42.698 --rc 
genhtml_legend=1 00:04:42.698 --rc geninfo_all_blocks=1 00:04:42.698 --rc geninfo_unexecuted_blocks=1 00:04:42.698 00:04:42.698 ' 00:04:42.698 13:14:31 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:42.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.698 --rc genhtml_branch_coverage=1 00:04:42.698 --rc genhtml_function_coverage=1 00:04:42.698 --rc genhtml_legend=1 00:04:42.698 --rc geninfo_all_blocks=1 00:04:42.698 --rc geninfo_unexecuted_blocks=1 00:04:42.698 00:04:42.698 ' 00:04:42.698 13:14:31 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:42.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.698 --rc genhtml_branch_coverage=1 00:04:42.698 --rc genhtml_function_coverage=1 00:04:42.698 --rc genhtml_legend=1 00:04:42.698 --rc geninfo_all_blocks=1 00:04:42.698 --rc geninfo_unexecuted_blocks=1 00:04:42.698 00:04:42.698 ' 00:04:42.698 13:14:31 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:42.698 OK 00:04:42.957 13:14:31 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:42.957 00:04:42.957 real 0m0.305s 00:04:42.957 user 0m0.175s 00:04:42.957 sys 0m0.144s 00:04:42.957 13:14:31 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.957 13:14:31 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:42.957 ************************************ 00:04:42.957 END TEST rpc_client 00:04:42.957 ************************************ 00:04:42.957 13:14:31 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:42.957 13:14:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.957 13:14:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.958 13:14:31 -- common/autotest_common.sh@10 -- # set +x 00:04:42.958 ************************************ 00:04:42.958 START TEST json_config 
00:04:42.958 ************************************ 00:04:42.958 13:14:31 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:42.958 13:14:32 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:42.958 13:14:32 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:42.958 13:14:32 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:42.958 13:14:32 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:42.958 13:14:32 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:42.958 13:14:32 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:42.958 13:14:32 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:42.958 13:14:32 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:42.958 13:14:32 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:42.958 13:14:32 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:42.958 13:14:32 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:42.958 13:14:32 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:42.958 13:14:32 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:42.958 13:14:32 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:42.958 13:14:32 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:42.958 13:14:32 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:42.958 13:14:32 json_config -- scripts/common.sh@345 -- # : 1 00:04:42.958 13:14:32 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:42.958 13:14:32 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:42.958 13:14:32 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:42.958 13:14:32 json_config -- scripts/common.sh@353 -- # local d=1 00:04:42.958 13:14:32 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:42.958 13:14:32 json_config -- scripts/common.sh@355 -- # echo 1 00:04:42.958 13:14:32 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:43.217 13:14:32 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:43.217 13:14:32 json_config -- scripts/common.sh@353 -- # local d=2 00:04:43.217 13:14:32 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:43.217 13:14:32 json_config -- scripts/common.sh@355 -- # echo 2 00:04:43.217 13:14:32 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:43.217 13:14:32 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:43.217 13:14:32 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:43.217 13:14:32 json_config -- scripts/common.sh@368 -- # return 0 00:04:43.217 13:14:32 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:43.217 13:14:32 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:43.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.217 --rc genhtml_branch_coverage=1 00:04:43.217 --rc genhtml_function_coverage=1 00:04:43.217 --rc genhtml_legend=1 00:04:43.217 --rc geninfo_all_blocks=1 00:04:43.217 --rc geninfo_unexecuted_blocks=1 00:04:43.217 00:04:43.217 ' 00:04:43.217 13:14:32 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:43.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.217 --rc genhtml_branch_coverage=1 00:04:43.217 --rc genhtml_function_coverage=1 00:04:43.217 --rc genhtml_legend=1 00:04:43.217 --rc geninfo_all_blocks=1 00:04:43.217 --rc geninfo_unexecuted_blocks=1 00:04:43.217 00:04:43.217 ' 00:04:43.217 13:14:32 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:43.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.217 --rc genhtml_branch_coverage=1 00:04:43.217 --rc genhtml_function_coverage=1 00:04:43.217 --rc genhtml_legend=1 00:04:43.217 --rc geninfo_all_blocks=1 00:04:43.217 --rc geninfo_unexecuted_blocks=1 00:04:43.217 00:04:43.217 ' 00:04:43.217 13:14:32 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:43.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.217 --rc genhtml_branch_coverage=1 00:04:43.217 --rc genhtml_function_coverage=1 00:04:43.217 --rc genhtml_legend=1 00:04:43.217 --rc geninfo_all_blocks=1 00:04:43.217 --rc geninfo_unexecuted_blocks=1 00:04:43.217 00:04:43.217 ' 00:04:43.217 13:14:32 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:43.217 13:14:32 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:43.217 13:14:32 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:43.217 13:14:32 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:43.217 13:14:32 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:43.217 13:14:32 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:43.217 13:14:32 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:43.217 13:14:32 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:43.218 13:14:32 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:43.218 13:14:32 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:43.218 13:14:32 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:43.218 13:14:32 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:43.218 13:14:32 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c667c019-11b1-4d83-ab1b-f127f05fffc9 00:04:43.218 13:14:32 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=c667c019-11b1-4d83-ab1b-f127f05fffc9 00:04:43.218 13:14:32 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:43.218 13:14:32 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:43.218 13:14:32 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:43.218 13:14:32 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:43.218 13:14:32 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:43.218 13:14:32 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:43.218 13:14:32 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:43.218 13:14:32 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:43.218 13:14:32 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:43.218 13:14:32 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.218 13:14:32 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.218 13:14:32 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.218 13:14:32 json_config -- paths/export.sh@5 -- # export PATH 00:04:43.218 13:14:32 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.218 13:14:32 json_config -- nvmf/common.sh@51 -- # : 0 00:04:43.218 13:14:32 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:43.218 13:14:32 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:43.218 13:14:32 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:43.218 13:14:32 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:43.218 13:14:32 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:43.218 13:14:32 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:43.218 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:43.218 13:14:32 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:43.218 13:14:32 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:43.218 13:14:32 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:43.218 13:14:32 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:04:43.218 13:14:32 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:43.218 13:14:32 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:43.218 13:14:32 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:43.218 13:14:32 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:43.218 13:14:32 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:43.218 WARNING: No tests are enabled so not running JSON configuration tests 00:04:43.218 13:14:32 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:43.218 00:04:43.218 real 0m0.233s 00:04:43.218 user 0m0.133s 00:04:43.218 sys 0m0.105s 00:04:43.218 13:14:32 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.218 13:14:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:43.218 ************************************ 00:04:43.218 END TEST json_config 00:04:43.218 ************************************ 00:04:43.218 13:14:32 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:43.218 13:14:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:43.218 13:14:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.218 13:14:32 -- common/autotest_common.sh@10 -- # set +x 00:04:43.218 ************************************ 00:04:43.218 START TEST json_config_extra_key 00:04:43.218 ************************************ 00:04:43.218 13:14:32 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:43.218 13:14:32 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:43.218 13:14:32 json_config_extra_key -- 
common/autotest_common.sh@1693 -- # lcov --version 00:04:43.218 13:14:32 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:43.478 13:14:32 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:43.478 13:14:32 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:43.478 13:14:32 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:43.478 13:14:32 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:43.478 13:14:32 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:43.478 13:14:32 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:43.478 13:14:32 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:43.478 13:14:32 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:43.478 13:14:32 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:43.478 13:14:32 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:43.478 13:14:32 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:43.478 13:14:32 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:43.478 13:14:32 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:43.478 13:14:32 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:43.478 13:14:32 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:43.478 13:14:32 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:43.478 13:14:32 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:43.478 13:14:32 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:43.478 13:14:32 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:43.478 13:14:32 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:43.478 13:14:32 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:43.478 13:14:32 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:43.478 13:14:32 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:43.478 13:14:32 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:43.478 13:14:32 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:43.478 13:14:32 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:43.478 13:14:32 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:43.478 13:14:32 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:43.478 13:14:32 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:43.478 13:14:32 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:43.478 13:14:32 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:43.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.478 --rc genhtml_branch_coverage=1 00:04:43.478 --rc genhtml_function_coverage=1 00:04:43.478 --rc genhtml_legend=1 00:04:43.478 --rc geninfo_all_blocks=1 00:04:43.478 --rc geninfo_unexecuted_blocks=1 00:04:43.478 00:04:43.478 ' 00:04:43.478 13:14:32 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:43.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.478 --rc genhtml_branch_coverage=1 00:04:43.478 --rc genhtml_function_coverage=1 00:04:43.478 --rc 
genhtml_legend=1 00:04:43.478 --rc geninfo_all_blocks=1 00:04:43.478 --rc geninfo_unexecuted_blocks=1 00:04:43.478 00:04:43.478 ' 00:04:43.478 13:14:32 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:43.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.478 --rc genhtml_branch_coverage=1 00:04:43.478 --rc genhtml_function_coverage=1 00:04:43.478 --rc genhtml_legend=1 00:04:43.478 --rc geninfo_all_blocks=1 00:04:43.478 --rc geninfo_unexecuted_blocks=1 00:04:43.478 00:04:43.478 ' 00:04:43.478 13:14:32 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:43.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.478 --rc genhtml_branch_coverage=1 00:04:43.478 --rc genhtml_function_coverage=1 00:04:43.478 --rc genhtml_legend=1 00:04:43.478 --rc geninfo_all_blocks=1 00:04:43.478 --rc geninfo_unexecuted_blocks=1 00:04:43.478 00:04:43.478 ' 00:04:43.479 13:14:32 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:43.479 13:14:32 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:43.479 13:14:32 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:43.479 13:14:32 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:43.479 13:14:32 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:43.479 13:14:32 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:43.479 13:14:32 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:43.479 13:14:32 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:43.479 13:14:32 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:43.479 13:14:32 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:43.479 13:14:32 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:43.479 13:14:32 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:43.479 13:14:32 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c667c019-11b1-4d83-ab1b-f127f05fffc9 00:04:43.479 13:14:32 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=c667c019-11b1-4d83-ab1b-f127f05fffc9 00:04:43.479 13:14:32 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:43.479 13:14:32 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:43.479 13:14:32 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:43.479 13:14:32 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:43.479 13:14:32 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:43.479 13:14:32 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:43.479 13:14:32 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:43.479 13:14:32 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:43.479 13:14:32 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:43.479 13:14:32 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.479 13:14:32 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.479 13:14:32 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.479 13:14:32 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:43.479 13:14:32 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.479 13:14:32 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:43.479 13:14:32 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:43.479 13:14:32 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:43.479 13:14:32 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:43.479 13:14:32 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:43.479 13:14:32 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:04:43.479 13:14:32 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:43.479 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:43.479 13:14:32 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:43.479 13:14:32 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:43.479 13:14:32 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:43.479 13:14:32 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:43.479 13:14:32 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:43.479 13:14:32 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:43.479 13:14:32 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:43.479 13:14:32 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:43.479 13:14:32 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:43.479 13:14:32 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:43.479 13:14:32 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:43.479 13:14:32 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:43.479 13:14:32 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:43.479 13:14:32 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:43.479 INFO: launching applications... 
00:04:43.479 13:14:32 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:43.479 13:14:32 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:43.479 13:14:32 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:43.479 13:14:32 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:43.479 13:14:32 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:43.479 13:14:32 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:43.479 13:14:32 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:43.479 13:14:32 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:43.479 13:14:32 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57668 00:04:43.479 13:14:32 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:43.479 Waiting for target to run... 00:04:43.479 13:14:32 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57668 /var/tmp/spdk_tgt.sock 00:04:43.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:43.479 13:14:32 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:43.479 13:14:32 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57668 ']' 00:04:43.479 13:14:32 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:43.479 13:14:32 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:43.479 13:14:32 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:43.479 13:14:32 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:43.479 13:14:32 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:43.479 [2024-11-17 13:14:32.621896] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:04:43.479 [2024-11-17 13:14:32.622012] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57668 ] 00:04:44.049 [2024-11-17 13:14:33.012435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.049 [2024-11-17 13:14:33.115946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.986 00:04:44.986 INFO: shutting down applications... 
00:04:44.986 13:14:33 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:44.986 13:14:33 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:44.986 13:14:33 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:44.986 13:14:33 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:44.986 13:14:33 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:44.986 13:14:33 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:44.986 13:14:33 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:44.986 13:14:33 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57668 ]] 00:04:44.986 13:14:33 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57668 00:04:44.986 13:14:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:44.986 13:14:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:44.986 13:14:33 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57668 00:04:44.986 13:14:33 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:45.245 13:14:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:45.245 13:14:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:45.245 13:14:34 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57668 00:04:45.245 13:14:34 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:45.812 13:14:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:45.812 13:14:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:45.812 13:14:34 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57668 00:04:45.812 13:14:34 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:46.380 13:14:35 
json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:46.380 13:14:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:46.380 13:14:35 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57668 00:04:46.380 13:14:35 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:46.961 13:14:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:46.962 13:14:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:46.962 13:14:35 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57668 00:04:46.962 13:14:35 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:47.237 13:14:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:47.237 13:14:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:47.237 13:14:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57668 00:04:47.237 13:14:36 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:47.806 13:14:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:47.807 13:14:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:47.807 13:14:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57668 00:04:47.807 13:14:36 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:47.807 13:14:36 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:47.807 13:14:36 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:47.807 SPDK target shutdown done 00:04:47.807 Success 00:04:47.807 13:14:36 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:47.807 13:14:36 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:47.807 ************************************ 00:04:47.807 END TEST json_config_extra_key 00:04:47.807 ************************************ 
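The `kill -0 57668` / `sleep 0.5` iterations traced above implement a simple poll-until-exit shutdown loop. Below is a hedged, self-contained sketch of that pattern, not the SPDK script itself: the function name `shutdown_app` and the retry budget are illustrative, and the demo signals with SIGTERM rather than the SIGINT seen in the log, because background jobs launched by a non-interactive shell ignore SIGINT.

```shell
#!/usr/bin/env bash
# Sketch of the shutdown pattern from the trace: signal the target,
# then poll `kill -0` every 0.5 s for up to 30 tries until it exits.
shutdown_app() {
  local pid=$1 i
  kill -TERM "$pid" 2>/dev/null    # the SPDK script sends SIGINT here
  for (( i = 0; i < 30; i++ )); do
    if ! kill -0 "$pid" 2>/dev/null; then
      echo 'target shutdown done'
      return 0
    fi
    sleep 0.5
  done
  echo 'target did not exit in time' >&2
  return 1
}

sleep 60 &                         # stand-in for the spdk_tgt process
shutdown_app "$!"                  # → prints "target shutdown done"
```

`kill -0` sends no signal; it only checks whether the pid still exists, which is why the loop can distinguish "still draining" from "gone" without touching the process again.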
00:04:47.807 00:04:47.807 real 0m4.633s 00:04:47.807 user 0m4.194s 00:04:47.807 sys 0m0.563s 00:04:47.807 13:14:36 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:47.807 13:14:36 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:47.807 13:14:36 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:47.807 13:14:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:47.807 13:14:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.807 13:14:36 -- common/autotest_common.sh@10 -- # set +x 00:04:47.807 ************************************ 00:04:47.807 START TEST alias_rpc 00:04:47.807 ************************************ 00:04:47.807 13:14:36 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:48.067 * Looking for test storage... 00:04:48.067 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:48.067 13:14:37 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:48.067 13:14:37 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:48.067 13:14:37 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:48.067 13:14:37 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:48.067 13:14:37 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:48.067 13:14:37 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:48.067 13:14:37 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:48.067 13:14:37 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:48.067 13:14:37 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:48.067 13:14:37 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:48.067 13:14:37 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:48.067 13:14:37 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 
00:04:48.067 13:14:37 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:48.067 13:14:37 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:48.067 13:14:37 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:48.067 13:14:37 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:48.067 13:14:37 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:48.067 13:14:37 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:48.067 13:14:37 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:48.067 13:14:37 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:48.067 13:14:37 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:48.067 13:14:37 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:48.067 13:14:37 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:48.067 13:14:37 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:48.067 13:14:37 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:48.067 13:14:37 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:48.067 13:14:37 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:48.067 13:14:37 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:48.067 13:14:37 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:48.067 13:14:37 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:48.067 13:14:37 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:48.067 13:14:37 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:48.067 13:14:37 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:48.067 13:14:37 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:48.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.067 --rc genhtml_branch_coverage=1 00:04:48.067 --rc genhtml_function_coverage=1 00:04:48.067 --rc genhtml_legend=1 00:04:48.067 --rc geninfo_all_blocks=1 
00:04:48.067 --rc geninfo_unexecuted_blocks=1 00:04:48.067 00:04:48.067 ' 00:04:48.067 13:14:37 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:48.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.067 --rc genhtml_branch_coverage=1 00:04:48.067 --rc genhtml_function_coverage=1 00:04:48.067 --rc genhtml_legend=1 00:04:48.067 --rc geninfo_all_blocks=1 00:04:48.067 --rc geninfo_unexecuted_blocks=1 00:04:48.067 00:04:48.067 ' 00:04:48.067 13:14:37 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:48.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.067 --rc genhtml_branch_coverage=1 00:04:48.067 --rc genhtml_function_coverage=1 00:04:48.067 --rc genhtml_legend=1 00:04:48.067 --rc geninfo_all_blocks=1 00:04:48.067 --rc geninfo_unexecuted_blocks=1 00:04:48.067 00:04:48.067 ' 00:04:48.067 13:14:37 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:48.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.067 --rc genhtml_branch_coverage=1 00:04:48.067 --rc genhtml_function_coverage=1 00:04:48.067 --rc genhtml_legend=1 00:04:48.067 --rc geninfo_all_blocks=1 00:04:48.067 --rc geninfo_unexecuted_blocks=1 00:04:48.067 00:04:48.067 ' 00:04:48.067 13:14:37 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:48.067 13:14:37 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57780 00:04:48.067 13:14:37 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:48.067 13:14:37 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57780 00:04:48.067 13:14:37 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57780 ']' 00:04:48.067 13:14:37 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.067 13:14:37 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:48.067 13:14:37 alias_rpc -- common/autotest_common.sh@842 
-- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:48.068 13:14:37 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:48.068 13:14:37 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.327 [2024-11-17 13:14:37.318022] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:04:48.327 [2024-11-17 13:14:37.318241] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57780 ] 00:04:48.327 [2024-11-17 13:14:37.491582] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.587 [2024-11-17 13:14:37.609587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.524 13:14:38 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:49.524 13:14:38 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:49.524 13:14:38 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:49.524 13:14:38 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57780 00:04:49.524 13:14:38 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57780 ']' 00:04:49.524 13:14:38 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57780 00:04:49.524 13:14:38 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:49.524 13:14:38 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:49.524 13:14:38 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57780 00:04:49.524 killing process with pid 57780 00:04:49.524 13:14:38 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:49.524 13:14:38 alias_rpc -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:49.524 13:14:38 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57780' 00:04:49.525 13:14:38 alias_rpc -- common/autotest_common.sh@973 -- # kill 57780 00:04:49.525 13:14:38 alias_rpc -- common/autotest_common.sh@978 -- # wait 57780 00:04:52.063 ************************************ 00:04:52.063 END TEST alias_rpc 00:04:52.063 ************************************ 00:04:52.063 00:04:52.063 real 0m4.182s 00:04:52.063 user 0m4.160s 00:04:52.063 sys 0m0.554s 00:04:52.063 13:14:41 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:52.063 13:14:41 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.063 13:14:41 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:52.063 13:14:41 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:52.063 13:14:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:52.063 13:14:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.063 13:14:41 -- common/autotest_common.sh@10 -- # set +x 00:04:52.063 ************************************ 00:04:52.063 START TEST spdkcli_tcp 00:04:52.063 ************************************ 00:04:52.063 13:14:41 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:52.324 * Looking for test storage... 
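The alias_rpc run above follows the common waitforlisten/killprocess pattern: launch spdk_tgt, poll until the UNIX domain socket /var/tmp/spdk.sock accepts connections, drive RPCs, then kill the target on exit. A minimal Python sketch of the polling step is below; the socket path, retry count, and the toy in-process "server" are placeholders, not SPDK code.

```python
# Hypothetical sketch of the waitforlisten pattern from the log:
# poll until a UNIX-domain socket accepts connections, with retries.
import os
import socket
import tempfile
import threading
import time

def wait_for_listen(sock_path, max_retries=100, delay=0.05):
    """Return True once sock_path accepts a connection, False on timeout."""
    for _ in range(max_retries):
        if os.path.exists(sock_path):
            try:
                with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
                    s.connect(sock_path)
                    return True
            except OSError:
                pass  # socket file exists but target is not listening yet
        time.sleep(delay)
    return False

def demo():
    sock_path = os.path.join(tempfile.mkdtemp(), "spdk.sock")

    def server():
        time.sleep(0.2)  # simulate slow startup of the target process
        srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        srv.bind(sock_path)
        srv.listen(1)
        conn, _ = srv.accept()
        conn.close()
        srv.close()

    t = threading.Thread(target=server, daemon=True)
    t.start()
    ok = wait_for_listen(sock_path)
    t.join(timeout=2)
    return ok

if __name__ == "__main__":
    print(demo())
```

The real helper in common/autotest_common.sh does the same loop in bash, with max_retries=100 as seen in the trace.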
00:04:52.324 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:52.324 13:14:41 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:52.324 13:14:41 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:52.324 13:14:41 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:52.324 13:14:41 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:52.324 13:14:41 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:52.324 13:14:41 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:52.324 13:14:41 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:52.324 13:14:41 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:52.324 13:14:41 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:52.324 13:14:41 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:52.324 13:14:41 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:52.324 13:14:41 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:52.324 13:14:41 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:52.324 13:14:41 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:52.324 13:14:41 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:52.324 13:14:41 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:52.324 13:14:41 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:52.324 13:14:41 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:52.324 13:14:41 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:52.324 13:14:41 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:52.324 13:14:41 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:52.324 13:14:41 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:52.324 13:14:41 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:52.324 13:14:41 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:52.324 13:14:41 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:52.324 13:14:41 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:52.324 13:14:41 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:52.324 13:14:41 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:52.324 13:14:41 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:52.324 13:14:41 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:52.324 13:14:41 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:52.324 13:14:41 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:52.324 13:14:41 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:52.324 13:14:41 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:52.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.324 --rc genhtml_branch_coverage=1 00:04:52.324 --rc genhtml_function_coverage=1 00:04:52.324 --rc genhtml_legend=1 00:04:52.324 --rc geninfo_all_blocks=1 00:04:52.324 --rc geninfo_unexecuted_blocks=1 00:04:52.324 00:04:52.324 ' 00:04:52.324 13:14:41 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:52.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.324 --rc genhtml_branch_coverage=1 00:04:52.324 --rc genhtml_function_coverage=1 00:04:52.324 --rc genhtml_legend=1 00:04:52.324 --rc geninfo_all_blocks=1 00:04:52.324 --rc geninfo_unexecuted_blocks=1 00:04:52.324 00:04:52.324 ' 00:04:52.324 13:14:41 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:52.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.324 --rc genhtml_branch_coverage=1 00:04:52.324 --rc genhtml_function_coverage=1 00:04:52.324 --rc genhtml_legend=1 00:04:52.324 --rc geninfo_all_blocks=1 00:04:52.324 --rc geninfo_unexecuted_blocks=1 00:04:52.324 00:04:52.324 ' 00:04:52.324 13:14:41 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:52.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.324 --rc genhtml_branch_coverage=1 00:04:52.324 --rc genhtml_function_coverage=1 00:04:52.324 --rc genhtml_legend=1 00:04:52.324 --rc geninfo_all_blocks=1 00:04:52.324 --rc geninfo_unexecuted_blocks=1 00:04:52.324 00:04:52.324 ' 00:04:52.324 13:14:41 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:52.324 13:14:41 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:52.324 13:14:41 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:52.324 13:14:41 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:52.324 13:14:41 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:52.324 13:14:41 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:52.324 13:14:41 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:52.324 13:14:41 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:52.324 13:14:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:52.324 13:14:41 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:52.324 13:14:41 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57887 00:04:52.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
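The tcp.sh variables above (IP_ADDRESS=127.0.0.1, PORT=9998) are used to expose the target's UNIX-domain RPC socket over TCP via socat, after which rpc.py talks to 127.0.0.1:9998. An illustrative Python equivalent of that bridge follows; a one-shot echo server stands in for spdk_tgt, and the ephemeral port replaces the fixed 9998.

```python
# Illustrative stand-in for: socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock
# A trivial echo server plays the role of spdk_tgt; all names are placeholders.
import os
import socket
import tempfile
import threading

def unix_echo_server(sock_path, ready):
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(sock_path)
    srv.listen(1)
    ready.set()
    conn, _ = srv.accept()
    conn.sendall(conn.recv(4096))  # echo the request back, like a stub RPC server
    conn.close()
    srv.close()

def tcp_to_unix_bridge(tcp_sock, sock_path):
    conn, _ = tcp_sock.accept()
    upstream = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    upstream.connect(sock_path)
    upstream.sendall(conn.recv(4096))   # forward request TCP -> UNIX
    conn.sendall(upstream.recv(4096))   # forward reply  UNIX -> TCP
    upstream.close()
    conn.close()

def demo():
    sock_path = os.path.join(tempfile.mkdtemp(), "spdk.sock")
    ready = threading.Event()
    threading.Thread(target=unix_echo_server, args=(sock_path, ready),
                     daemon=True).start()
    ready.wait(timeout=2)

    tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    tcp.bind(("127.0.0.1", 0))  # ephemeral port instead of the fixed 9998
    tcp.listen(1)
    port = tcp.getsockname()[1]
    threading.Thread(target=tcp_to_unix_bridge, args=(tcp, sock_path),
                     daemon=True).start()

    client = socket.create_connection(("127.0.0.1", port))
    client.sendall(b"rpc_get_methods")
    reply = client.recv(4096)
    client.close()
    tcp.close()
    return reply

if __name__ == "__main__":
    print(demo())
```

This mirrors what the test does when it later runs `socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock` and queries rpc_get_methods over TCP.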
00:04:52.324 13:14:41 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57887 00:04:52.324 13:14:41 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 57887 ']' 00:04:52.324 13:14:41 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.324 13:14:41 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:52.324 13:14:41 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:52.324 13:14:41 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:52.324 13:14:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:52.584 [2024-11-17 13:14:41.634800] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:04:52.584 [2024-11-17 13:14:41.635027] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57887 ] 00:04:52.584 [2024-11-17 13:14:41.805076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:52.844 [2024-11-17 13:14:41.925623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.844 [2024-11-17 13:14:41.925671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:53.783 13:14:42 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:53.783 13:14:42 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:53.783 13:14:42 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57904 00:04:53.783 13:14:42 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:53.783 13:14:42 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:54.043 [ 00:04:54.043 
"bdev_malloc_delete", 00:04:54.043 "bdev_malloc_create", 00:04:54.043 "bdev_null_resize", 00:04:54.043 "bdev_null_delete", 00:04:54.043 "bdev_null_create", 00:04:54.043 "bdev_nvme_cuse_unregister", 00:04:54.043 "bdev_nvme_cuse_register", 00:04:54.043 "bdev_opal_new_user", 00:04:54.043 "bdev_opal_set_lock_state", 00:04:54.043 "bdev_opal_delete", 00:04:54.043 "bdev_opal_get_info", 00:04:54.043 "bdev_opal_create", 00:04:54.043 "bdev_nvme_opal_revert", 00:04:54.043 "bdev_nvme_opal_init", 00:04:54.043 "bdev_nvme_send_cmd", 00:04:54.043 "bdev_nvme_set_keys", 00:04:54.043 "bdev_nvme_get_path_iostat", 00:04:54.043 "bdev_nvme_get_mdns_discovery_info", 00:04:54.043 "bdev_nvme_stop_mdns_discovery", 00:04:54.043 "bdev_nvme_start_mdns_discovery", 00:04:54.043 "bdev_nvme_set_multipath_policy", 00:04:54.043 "bdev_nvme_set_preferred_path", 00:04:54.043 "bdev_nvme_get_io_paths", 00:04:54.043 "bdev_nvme_remove_error_injection", 00:04:54.043 "bdev_nvme_add_error_injection", 00:04:54.043 "bdev_nvme_get_discovery_info", 00:04:54.043 "bdev_nvme_stop_discovery", 00:04:54.043 "bdev_nvme_start_discovery", 00:04:54.043 "bdev_nvme_get_controller_health_info", 00:04:54.043 "bdev_nvme_disable_controller", 00:04:54.043 "bdev_nvme_enable_controller", 00:04:54.043 "bdev_nvme_reset_controller", 00:04:54.043 "bdev_nvme_get_transport_statistics", 00:04:54.043 "bdev_nvme_apply_firmware", 00:04:54.043 "bdev_nvme_detach_controller", 00:04:54.043 "bdev_nvme_get_controllers", 00:04:54.043 "bdev_nvme_attach_controller", 00:04:54.043 "bdev_nvme_set_hotplug", 00:04:54.043 "bdev_nvme_set_options", 00:04:54.043 "bdev_passthru_delete", 00:04:54.043 "bdev_passthru_create", 00:04:54.043 "bdev_lvol_set_parent_bdev", 00:04:54.043 "bdev_lvol_set_parent", 00:04:54.043 "bdev_lvol_check_shallow_copy", 00:04:54.043 "bdev_lvol_start_shallow_copy", 00:04:54.043 "bdev_lvol_grow_lvstore", 00:04:54.043 "bdev_lvol_get_lvols", 00:04:54.043 "bdev_lvol_get_lvstores", 00:04:54.043 "bdev_lvol_delete", 00:04:54.043 
"bdev_lvol_set_read_only", 00:04:54.043 "bdev_lvol_resize", 00:04:54.043 "bdev_lvol_decouple_parent", 00:04:54.043 "bdev_lvol_inflate", 00:04:54.043 "bdev_lvol_rename", 00:04:54.043 "bdev_lvol_clone_bdev", 00:04:54.043 "bdev_lvol_clone", 00:04:54.043 "bdev_lvol_snapshot", 00:04:54.043 "bdev_lvol_create", 00:04:54.043 "bdev_lvol_delete_lvstore", 00:04:54.043 "bdev_lvol_rename_lvstore", 00:04:54.043 "bdev_lvol_create_lvstore", 00:04:54.043 "bdev_raid_set_options", 00:04:54.043 "bdev_raid_remove_base_bdev", 00:04:54.043 "bdev_raid_add_base_bdev", 00:04:54.043 "bdev_raid_delete", 00:04:54.043 "bdev_raid_create", 00:04:54.043 "bdev_raid_get_bdevs", 00:04:54.043 "bdev_error_inject_error", 00:04:54.043 "bdev_error_delete", 00:04:54.043 "bdev_error_create", 00:04:54.043 "bdev_split_delete", 00:04:54.043 "bdev_split_create", 00:04:54.043 "bdev_delay_delete", 00:04:54.043 "bdev_delay_create", 00:04:54.043 "bdev_delay_update_latency", 00:04:54.043 "bdev_zone_block_delete", 00:04:54.043 "bdev_zone_block_create", 00:04:54.043 "blobfs_create", 00:04:54.043 "blobfs_detect", 00:04:54.043 "blobfs_set_cache_size", 00:04:54.043 "bdev_aio_delete", 00:04:54.043 "bdev_aio_rescan", 00:04:54.043 "bdev_aio_create", 00:04:54.043 "bdev_ftl_set_property", 00:04:54.043 "bdev_ftl_get_properties", 00:04:54.043 "bdev_ftl_get_stats", 00:04:54.043 "bdev_ftl_unmap", 00:04:54.043 "bdev_ftl_unload", 00:04:54.043 "bdev_ftl_delete", 00:04:54.043 "bdev_ftl_load", 00:04:54.043 "bdev_ftl_create", 00:04:54.043 "bdev_virtio_attach_controller", 00:04:54.043 "bdev_virtio_scsi_get_devices", 00:04:54.043 "bdev_virtio_detach_controller", 00:04:54.043 "bdev_virtio_blk_set_hotplug", 00:04:54.043 "bdev_iscsi_delete", 00:04:54.043 "bdev_iscsi_create", 00:04:54.043 "bdev_iscsi_set_options", 00:04:54.043 "accel_error_inject_error", 00:04:54.043 "ioat_scan_accel_module", 00:04:54.043 "dsa_scan_accel_module", 00:04:54.043 "iaa_scan_accel_module", 00:04:54.043 "keyring_file_remove_key", 00:04:54.043 
"keyring_file_add_key", 00:04:54.043 "keyring_linux_set_options", 00:04:54.043 "fsdev_aio_delete", 00:04:54.043 "fsdev_aio_create", 00:04:54.043 "iscsi_get_histogram", 00:04:54.043 "iscsi_enable_histogram", 00:04:54.043 "iscsi_set_options", 00:04:54.043 "iscsi_get_auth_groups", 00:04:54.043 "iscsi_auth_group_remove_secret", 00:04:54.043 "iscsi_auth_group_add_secret", 00:04:54.043 "iscsi_delete_auth_group", 00:04:54.043 "iscsi_create_auth_group", 00:04:54.043 "iscsi_set_discovery_auth", 00:04:54.043 "iscsi_get_options", 00:04:54.043 "iscsi_target_node_request_logout", 00:04:54.043 "iscsi_target_node_set_redirect", 00:04:54.043 "iscsi_target_node_set_auth", 00:04:54.043 "iscsi_target_node_add_lun", 00:04:54.043 "iscsi_get_stats", 00:04:54.043 "iscsi_get_connections", 00:04:54.044 "iscsi_portal_group_set_auth", 00:04:54.044 "iscsi_start_portal_group", 00:04:54.044 "iscsi_delete_portal_group", 00:04:54.044 "iscsi_create_portal_group", 00:04:54.044 "iscsi_get_portal_groups", 00:04:54.044 "iscsi_delete_target_node", 00:04:54.044 "iscsi_target_node_remove_pg_ig_maps", 00:04:54.044 "iscsi_target_node_add_pg_ig_maps", 00:04:54.044 "iscsi_create_target_node", 00:04:54.044 "iscsi_get_target_nodes", 00:04:54.044 "iscsi_delete_initiator_group", 00:04:54.044 "iscsi_initiator_group_remove_initiators", 00:04:54.044 "iscsi_initiator_group_add_initiators", 00:04:54.044 "iscsi_create_initiator_group", 00:04:54.044 "iscsi_get_initiator_groups", 00:04:54.044 "nvmf_set_crdt", 00:04:54.044 "nvmf_set_config", 00:04:54.044 "nvmf_set_max_subsystems", 00:04:54.044 "nvmf_stop_mdns_prr", 00:04:54.044 "nvmf_publish_mdns_prr", 00:04:54.044 "nvmf_subsystem_get_listeners", 00:04:54.044 "nvmf_subsystem_get_qpairs", 00:04:54.044 "nvmf_subsystem_get_controllers", 00:04:54.044 "nvmf_get_stats", 00:04:54.044 "nvmf_get_transports", 00:04:54.044 "nvmf_create_transport", 00:04:54.044 "nvmf_get_targets", 00:04:54.044 "nvmf_delete_target", 00:04:54.044 "nvmf_create_target", 00:04:54.044 
"nvmf_subsystem_allow_any_host", 00:04:54.044 "nvmf_subsystem_set_keys", 00:04:54.044 "nvmf_subsystem_remove_host", 00:04:54.044 "nvmf_subsystem_add_host", 00:04:54.044 "nvmf_ns_remove_host", 00:04:54.044 "nvmf_ns_add_host", 00:04:54.044 "nvmf_subsystem_remove_ns", 00:04:54.044 "nvmf_subsystem_set_ns_ana_group", 00:04:54.044 "nvmf_subsystem_add_ns", 00:04:54.044 "nvmf_subsystem_listener_set_ana_state", 00:04:54.044 "nvmf_discovery_get_referrals", 00:04:54.044 "nvmf_discovery_remove_referral", 00:04:54.044 "nvmf_discovery_add_referral", 00:04:54.044 "nvmf_subsystem_remove_listener", 00:04:54.044 "nvmf_subsystem_add_listener", 00:04:54.044 "nvmf_delete_subsystem", 00:04:54.044 "nvmf_create_subsystem", 00:04:54.044 "nvmf_get_subsystems", 00:04:54.044 "env_dpdk_get_mem_stats", 00:04:54.044 "nbd_get_disks", 00:04:54.044 "nbd_stop_disk", 00:04:54.044 "nbd_start_disk", 00:04:54.044 "ublk_recover_disk", 00:04:54.044 "ublk_get_disks", 00:04:54.044 "ublk_stop_disk", 00:04:54.044 "ublk_start_disk", 00:04:54.044 "ublk_destroy_target", 00:04:54.044 "ublk_create_target", 00:04:54.044 "virtio_blk_create_transport", 00:04:54.044 "virtio_blk_get_transports", 00:04:54.044 "vhost_controller_set_coalescing", 00:04:54.044 "vhost_get_controllers", 00:04:54.044 "vhost_delete_controller", 00:04:54.044 "vhost_create_blk_controller", 00:04:54.044 "vhost_scsi_controller_remove_target", 00:04:54.044 "vhost_scsi_controller_add_target", 00:04:54.044 "vhost_start_scsi_controller", 00:04:54.044 "vhost_create_scsi_controller", 00:04:54.044 "thread_set_cpumask", 00:04:54.044 "scheduler_set_options", 00:04:54.044 "framework_get_governor", 00:04:54.044 "framework_get_scheduler", 00:04:54.044 "framework_set_scheduler", 00:04:54.044 "framework_get_reactors", 00:04:54.044 "thread_get_io_channels", 00:04:54.044 "thread_get_pollers", 00:04:54.044 "thread_get_stats", 00:04:54.044 "framework_monitor_context_switch", 00:04:54.044 "spdk_kill_instance", 00:04:54.044 "log_enable_timestamps", 00:04:54.044 
"log_get_flags", 00:04:54.044 "log_clear_flag", 00:04:54.044 "log_set_flag", 00:04:54.044 "log_get_level", 00:04:54.044 "log_set_level", 00:04:54.044 "log_get_print_level", 00:04:54.044 "log_set_print_level", 00:04:54.044 "framework_enable_cpumask_locks", 00:04:54.044 "framework_disable_cpumask_locks", 00:04:54.044 "framework_wait_init", 00:04:54.044 "framework_start_init", 00:04:54.044 "scsi_get_devices", 00:04:54.044 "bdev_get_histogram", 00:04:54.044 "bdev_enable_histogram", 00:04:54.044 "bdev_set_qos_limit", 00:04:54.044 "bdev_set_qd_sampling_period", 00:04:54.044 "bdev_get_bdevs", 00:04:54.044 "bdev_reset_iostat", 00:04:54.044 "bdev_get_iostat", 00:04:54.044 "bdev_examine", 00:04:54.044 "bdev_wait_for_examine", 00:04:54.044 "bdev_set_options", 00:04:54.044 "accel_get_stats", 00:04:54.044 "accel_set_options", 00:04:54.044 "accel_set_driver", 00:04:54.044 "accel_crypto_key_destroy", 00:04:54.044 "accel_crypto_keys_get", 00:04:54.044 "accel_crypto_key_create", 00:04:54.044 "accel_assign_opc", 00:04:54.044 "accel_get_module_info", 00:04:54.044 "accel_get_opc_assignments", 00:04:54.044 "vmd_rescan", 00:04:54.044 "vmd_remove_device", 00:04:54.044 "vmd_enable", 00:04:54.044 "sock_get_default_impl", 00:04:54.044 "sock_set_default_impl", 00:04:54.044 "sock_impl_set_options", 00:04:54.044 "sock_impl_get_options", 00:04:54.044 "iobuf_get_stats", 00:04:54.044 "iobuf_set_options", 00:04:54.044 "keyring_get_keys", 00:04:54.044 "framework_get_pci_devices", 00:04:54.044 "framework_get_config", 00:04:54.044 "framework_get_subsystems", 00:04:54.044 "fsdev_set_opts", 00:04:54.044 "fsdev_get_opts", 00:04:54.044 "trace_get_info", 00:04:54.044 "trace_get_tpoint_group_mask", 00:04:54.044 "trace_disable_tpoint_group", 00:04:54.044 "trace_enable_tpoint_group", 00:04:54.044 "trace_clear_tpoint_mask", 00:04:54.044 "trace_set_tpoint_mask", 00:04:54.044 "notify_get_notifications", 00:04:54.044 "notify_get_types", 00:04:54.044 "spdk_get_version", 00:04:54.044 "rpc_get_methods" 00:04:54.044 
] 00:04:54.044 13:14:43 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:54.044 13:14:43 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:54.044 13:14:43 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:54.044 13:14:43 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:54.044 13:14:43 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57887 00:04:54.044 13:14:43 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57887 ']' 00:04:54.044 13:14:43 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57887 00:04:54.044 13:14:43 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:54.044 13:14:43 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:54.044 13:14:43 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57887 00:04:54.044 13:14:43 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:54.044 13:14:43 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:54.044 13:14:43 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57887' 00:04:54.044 killing process with pid 57887 00:04:54.044 13:14:43 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57887 00:04:54.044 13:14:43 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57887 00:04:56.609 ************************************ 00:04:56.609 END TEST spdkcli_tcp 00:04:56.609 ************************************ 00:04:56.609 00:04:56.609 real 0m4.317s 00:04:56.609 user 0m7.601s 00:04:56.609 sys 0m0.674s 00:04:56.609 13:14:45 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:56.609 13:14:45 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:56.609 13:14:45 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:56.609 13:14:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:04:56.609 13:14:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.609 13:14:45 -- common/autotest_common.sh@10 -- # set +x 00:04:56.609 ************************************ 00:04:56.609 START TEST dpdk_mem_utility 00:04:56.609 ************************************ 00:04:56.609 13:14:45 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:56.609 * Looking for test storage... 00:04:56.609 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:56.609 13:14:45 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:56.609 13:14:45 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:04:56.609 13:14:45 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:56.609 13:14:45 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:56.609 13:14:45 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:56.609 13:14:45 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:56.609 13:14:45 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:56.609 13:14:45 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:56.609 13:14:45 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:56.609 13:14:45 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:56.609 13:14:45 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:56.609 13:14:45 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:56.609 13:14:45 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:56.609 13:14:45 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:56.609 13:14:45 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:56.609 13:14:45 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:56.609 13:14:45 dpdk_mem_utility -- 
scripts/common.sh@345 -- # : 1 00:04:56.609 13:14:45 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:56.609 13:14:45 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:56.609 13:14:45 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:56.609 13:14:45 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:56.609 13:14:45 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:56.609 13:14:45 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:56.609 13:14:45 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:56.609 13:14:45 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:56.610 13:14:45 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:56.610 13:14:45 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:56.610 13:14:45 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:56.610 13:14:45 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:56.610 13:14:45 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:56.878 13:14:45 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:56.878 13:14:45 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:56.878 13:14:45 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:56.878 13:14:45 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:56.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.878 --rc genhtml_branch_coverage=1 00:04:56.878 --rc genhtml_function_coverage=1 00:04:56.878 --rc genhtml_legend=1 00:04:56.878 --rc geninfo_all_blocks=1 00:04:56.878 --rc geninfo_unexecuted_blocks=1 00:04:56.878 00:04:56.878 ' 00:04:56.878 13:14:45 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:56.878 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:04:56.878 --rc genhtml_branch_coverage=1 00:04:56.878 --rc genhtml_function_coverage=1 00:04:56.878 --rc genhtml_legend=1 00:04:56.878 --rc geninfo_all_blocks=1 00:04:56.878 --rc geninfo_unexecuted_blocks=1 00:04:56.878 00:04:56.878 ' 00:04:56.878 13:14:45 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:56.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.878 --rc genhtml_branch_coverage=1 00:04:56.878 --rc genhtml_function_coverage=1 00:04:56.878 --rc genhtml_legend=1 00:04:56.878 --rc geninfo_all_blocks=1 00:04:56.878 --rc geninfo_unexecuted_blocks=1 00:04:56.878 00:04:56.878 ' 00:04:56.878 13:14:45 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:56.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.878 --rc genhtml_branch_coverage=1 00:04:56.878 --rc genhtml_function_coverage=1 00:04:56.878 --rc genhtml_legend=1 00:04:56.878 --rc geninfo_all_blocks=1 00:04:56.878 --rc geninfo_unexecuted_blocks=1 00:04:56.878 00:04:56.878 ' 00:04:56.878 13:14:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:56.878 13:14:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58009 00:04:56.878 13:14:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:56.878 13:14:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58009 00:04:56.878 13:14:45 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58009 ']' 00:04:56.878 13:14:45 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.878 13:14:45 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:56.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
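The dpdk_mem_utility test above sets MEM_SCRIPT=scripts/dpdk_mem_info.py, which renders /tmp/spdk_mem_dump.txt as "size: <N> MiB name: <id>" summary lines (visible in the heap/mempool dump further down in this log). A toy parser for lines of that shape is sketched below; the sample lines are copied from the mempool section of this log, but the parsing code itself is illustrative, not SPDK's.

```python
# Toy parser for "size: <N> MiB name: <id>" summary lines as printed by
# the memory-dump tooling in this log. Illustrative sketch only.
import re

LINE_RE = re.compile(r"size:\s+([0-9.]+)\s+MiB\s+name:\s+(\S+)")

def parse_pools(text):
    """Return {name: size_mib} for every summary line in text."""
    return {name: float(size) for size, name in LINE_RE.findall(text)}

# Sample lines taken from the mempool section of this log.
SAMPLE = """\
size: 212.674988 MiB name: PDU_immediate_data_Pool
size: 158.602051 MiB name: PDU_data_out_Pool
size: 4.133484 MiB name: evtpool_58009
"""

if __name__ == "__main__":
    pools = parse_pools(SAMPLE)
    print(len(pools), round(sum(pools.values()), 6))
```

Summing the parsed sizes is a quick consistency check against the "mempools totaling" line that the dump prints alongside the per-pool entries.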
00:04:56.878 13:14:45 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.878 13:14:45 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:56.878 13:14:45 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:56.878 [2024-11-17 13:14:45.928334] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:04:56.878 [2024-11-17 13:14:45.928462] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58009 ] 00:04:57.138 [2024-11-17 13:14:46.102883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.138 [2024-11-17 13:14:46.223127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.078 13:14:47 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:58.078 13:14:47 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:58.078 13:14:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:58.078 13:14:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:58.078 13:14:47 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.078 13:14:47 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:58.078 { 00:04:58.078 "filename": "/tmp/spdk_mem_dump.txt" 00:04:58.078 } 00:04:58.078 13:14:47 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.078 13:14:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:58.078 DPDK memory size 816.000000 MiB in 1 heap(s) 00:04:58.078 1 heaps 
totaling size 816.000000 MiB 00:04:58.078 size: 816.000000 MiB heap id: 0 00:04:58.078 end heaps---------- 00:04:58.078 9 mempools totaling size 595.772034 MiB 00:04:58.078 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:58.078 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:58.078 size: 92.545471 MiB name: bdev_io_58009 00:04:58.078 size: 50.003479 MiB name: msgpool_58009 00:04:58.078 size: 36.509338 MiB name: fsdev_io_58009 00:04:58.078 size: 21.763794 MiB name: PDU_Pool 00:04:58.078 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:58.078 size: 4.133484 MiB name: evtpool_58009 00:04:58.078 size: 0.026123 MiB name: Session_Pool 00:04:58.078 end mempools------- 00:04:58.078 6 memzones totaling size 4.142822 MiB 00:04:58.078 size: 1.000366 MiB name: RG_ring_0_58009 00:04:58.078 size: 1.000366 MiB name: RG_ring_1_58009 00:04:58.078 size: 1.000366 MiB name: RG_ring_4_58009 00:04:58.078 size: 1.000366 MiB name: RG_ring_5_58009 00:04:58.078 size: 0.125366 MiB name: RG_ring_2_58009 00:04:58.078 size: 0.015991 MiB name: RG_ring_3_58009 00:04:58.078 end memzones------- 00:04:58.078 13:14:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:58.078 heap id: 0 total size: 816.000000 MiB number of busy elements: 316 number of free elements: 18 00:04:58.078 list of free elements. 
size: 16.791138 MiB 00:04:58.078 element at address: 0x200006400000 with size: 1.995972 MiB 00:04:58.078 element at address: 0x20000a600000 with size: 1.995972 MiB 00:04:58.078 element at address: 0x200003e00000 with size: 1.991028 MiB 00:04:58.078 element at address: 0x200018d00040 with size: 0.999939 MiB 00:04:58.078 element at address: 0x200019100040 with size: 0.999939 MiB 00:04:58.078 element at address: 0x200019200000 with size: 0.999084 MiB 00:04:58.078 element at address: 0x200031e00000 with size: 0.994324 MiB 00:04:58.078 element at address: 0x200000400000 with size: 0.992004 MiB 00:04:58.078 element at address: 0x200018a00000 with size: 0.959656 MiB 00:04:58.078 element at address: 0x200019500040 with size: 0.936401 MiB 00:04:58.078 element at address: 0x200000200000 with size: 0.716980 MiB 00:04:58.078 element at address: 0x20001ac00000 with size: 0.561707 MiB 00:04:58.078 element at address: 0x200000c00000 with size: 0.490173 MiB 00:04:58.078 element at address: 0x200018e00000 with size: 0.487976 MiB 00:04:58.078 element at address: 0x200019600000 with size: 0.485413 MiB 00:04:58.078 element at address: 0x200012c00000 with size: 0.443237 MiB 00:04:58.078 element at address: 0x200028000000 with size: 0.390442 MiB 00:04:58.078 element at address: 0x200000800000 with size: 0.350891 MiB 00:04:58.078 list of standard malloc elements. 
size: 199.287964 MiB 00:04:58.078 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:04:58.078 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:04:58.078 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:04:58.078 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:04:58.078 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:04:58.078 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:04:58.078 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:04:58.078 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:04:58.078 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:04:58.078 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:04:58.078 element at address: 0x200012bff040 with size: 0.000305 MiB 00:04:58.078 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:04:58.078 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:04:58.078 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:04:58.078 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:04:58.078 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:04:58.078 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:04:58.078 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:04:58.078 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:04:58.078 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:04:58.078 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:04:58.078 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:04:58.078 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:04:58.078 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:04:58.078 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:04:58.078 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:04:58.078 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:04:58.078 element at 
address: 0x2000004fed40 with size: 0.000244 MiB 00:04:58.078 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:04:58.078 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:04:58.078 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:04:58.078 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:04:58.078 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:04:58.078 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:04:58.078 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:04:58.078 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:04:58.078 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:04:58.078 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:04:58.078 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:04:58.079 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:04:58.079 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:04:58.079 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:04:58.079 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:04:58.079 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:04:58.079 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:04:58.079 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:04:58.079 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:04:58.079 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:04:58.079 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:04:58.079 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:04:58.079 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:04:58.079 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:04:58.079 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:04:58.079 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:04:58.079 element at address: 0x20000087ecc0 with size: 0.000244 MiB 
00:04:58.079 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:04:58.079 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:04:58.079 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:04:58.079 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:04:58.079 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:04:58.079 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:04:58.079 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:04:58.079 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:04:58.079 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:04:58.079 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:04:58.079 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:04:58.079 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:04:58.079 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:04:58.079 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:04:58.079 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:04:58.079 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:04:58.079 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:04:58.079 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:04:58.079 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:04:58.079 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:04:58.079 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:04:58.079 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:04:58.079 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:04:58.079 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:04:58.079 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:04:58.079 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:04:58.079 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:04:58.079 element at address: 0x200000c7e8c0 with 
size: 0.000244 MiB 00:04:58.079 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:04:58.079 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:04:58.079 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:04:58.079 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:04:58.079 element at address: 0x200000cff000 with size: 0.000244 MiB 00:04:58.079 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:04:58.079 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:04:58.079 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:04:58.079 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:04:58.079 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:04:58.079 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:04:58.079 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:04:58.079 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:04:58.079 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:04:58.079 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:04:58.079 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:04:58.079 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:04:58.079 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:04:58.079 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:04:58.079 element at address: 0x200012bff180 with size: 0.000244 MiB 00:04:58.079 element at address: 0x200012bff280 with size: 0.000244 MiB 00:04:58.079 element at address: 0x200012bff380 with size: 0.000244 MiB 00:04:58.079 element at address: 0x200012bff480 with size: 0.000244 MiB 00:04:58.079 element at address: 0x200012bff580 with size: 0.000244 MiB 00:04:58.079 element at address: 0x200012bff680 with size: 0.000244 MiB 00:04:58.079 element at address: 0x200012bff780 with size: 0.000244 MiB 00:04:58.079 element at address: 0x200012bff880 with size: 0.000244 MiB 00:04:58.079 element at address: 
0x200012bff980 with size: 0.000244 MiB 00:04:58.079 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:04:58.079 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:04:58.079 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:04:58.079 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:04:58.079 element at address: 0x200012c71780 with size: 0.000244 MiB 00:04:58.079 element at address: 0x200012c71880 with size: 0.000244 MiB 00:04:58.079 element at address: 0x200012c71980 with size: 0.000244 MiB 00:04:58.079 element at address: 0x200012c71a80 with size: 0.000244 MiB 00:04:58.079 element at address: 0x200012c71b80 with size: 0.000244 MiB 00:04:58.079 element at address: 0x200012c71c80 with size: 0.000244 MiB 00:04:58.079 element at address: 0x200012c71d80 with size: 0.000244 MiB 00:04:58.079 element at address: 0x200012c71e80 with size: 0.000244 MiB 00:04:58.079 element at address: 0x200012c71f80 with size: 0.000244 MiB 00:04:58.079 element at address: 0x200012c72080 with size: 0.000244 MiB 00:04:58.079 element at address: 0x200012c72180 with size: 0.000244 MiB 00:04:58.079 element at address: 0x200012cf24c0 with size: 0.000244 MiB 00:04:58.079 element at address: 0x200018afdd00 with size: 0.000244 MiB 00:04:58.079 element at address: 0x200018e7cec0 with size: 0.000244 MiB 00:04:58.079 element at address: 0x200018e7cfc0 with size: 0.000244 MiB 00:04:58.079 element at address: 0x200018e7d0c0 with size: 0.000244 MiB 00:04:58.079 element at address: 0x200018e7d1c0 with size: 0.000244 MiB 00:04:58.079 element at address: 0x200018e7d2c0 with size: 0.000244 MiB 00:04:58.079 element at address: 0x200018e7d3c0 with size: 0.000244 MiB 00:04:58.079 element at address: 0x200018e7d4c0 with size: 0.000244 MiB 00:04:58.079 element at address: 0x200018e7d5c0 with size: 0.000244 MiB 00:04:58.079 element at address: 0x200018e7d6c0 with size: 0.000244 MiB 00:04:58.079 element at address: 0x200018e7d7c0 with size: 0.000244 MiB 00:04:58.079 
element at address: 0x200018e7d8c0 with size: 0.000244 MiB 00:04:58.079 element at address: 0x200018e7d9c0 with size: 0.000244 MiB 00:04:58.079 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:04:58.079 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:04:58.079 element at address: 0x2000195efbc0 with size: 0.000244 MiB 00:04:58.079 element at address: 0x2000195efcc0 with size: 0.000244 MiB 00:04:58.079 element at address: 0x2000196bc680 with size: 0.000244 MiB 00:04:58.079 element at address: 0x20001ac8fcc0 with size: 0.000244 MiB 00:04:58.079 element at address: 0x20001ac8fdc0 with size: 0.000244 MiB 00:04:58.079 element at address: 0x20001ac8fec0 with size: 0.000244 MiB 00:04:58.079 element at address: 0x20001ac8ffc0 with size: 0.000244 MiB 00:04:58.079 element at address: 0x20001ac900c0 with size: 0.000244 MiB 00:04:58.079 element at address: 0x20001ac901c0 with size: 0.000244 MiB 00:04:58.079 element at address: 0x20001ac902c0 with size: 0.000244 MiB 00:04:58.079 element at address: 0x20001ac903c0 with size: 0.000244 MiB 00:04:58.079 element at address: 0x20001ac904c0 with size: 0.000244 MiB 00:04:58.079 element at address: 0x20001ac905c0 with size: 0.000244 MiB 00:04:58.079 element at address: 0x20001ac906c0 with size: 0.000244 MiB 00:04:58.079 element at address: 0x20001ac907c0 with size: 0.000244 MiB 00:04:58.079 element at address: 0x20001ac908c0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac909c0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac90ac0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac90bc0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac90cc0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac90dc0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac90ec0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac90fc0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac910c0 with size: 0.000244 
MiB 00:04:58.080 element at address: 0x20001ac911c0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac912c0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac913c0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac914c0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac915c0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac916c0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac917c0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac918c0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac919c0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac91ac0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac91bc0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac921c0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac922c0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac923c0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac925c0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac926c0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac928c0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac929c0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac92ac0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac92bc0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac92cc0 
with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac92dc0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac92ec0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac92fc0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac930c0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac931c0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac933c0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac934c0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac936c0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac937c0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac939c0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac93dc0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac941c0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac942c0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac943c0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac944c0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac945c0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac946c0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac947c0 with size: 0.000244 MiB 00:04:58.080 element at 
address: 0x20001ac948c0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac949c0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac94ac0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac94bc0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac94cc0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac94dc0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac94ec0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac94fc0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac950c0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac951c0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac952c0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20001ac953c0 with size: 0.000244 MiB 00:04:58.080 element at address: 0x200028063f40 with size: 0.000244 MiB 00:04:58.080 element at address: 0x200028064040 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20002806ad00 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20002806af80 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20002806b080 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20002806b180 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20002806b280 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20002806b380 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20002806b480 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20002806b580 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20002806b680 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20002806b780 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20002806b880 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20002806b980 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20002806ba80 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20002806bb80 with size: 0.000244 MiB 
00:04:58.080 element at address: 0x20002806bc80 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20002806bd80 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20002806be80 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20002806bf80 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20002806c080 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20002806c180 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20002806c280 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20002806c380 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20002806c480 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20002806c580 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20002806c680 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20002806c780 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20002806c880 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20002806c980 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20002806cb80 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20002806cc80 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20002806cd80 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20002806ce80 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20002806cf80 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20002806d080 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20002806d180 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20002806d280 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20002806d380 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20002806d480 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20002806d580 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20002806d680 with size: 0.000244 MiB 00:04:58.080 element at address: 0x20002806d780 with 
size: 0.000244 MiB 00:04:58.080 element at address: 0x20002806d880 with size: 0.000244 MiB 00:04:58.081 element at address: 0x20002806d980 with size: 0.000244 MiB 00:04:58.081 element at address: 0x20002806da80 with size: 0.000244 MiB 00:04:58.081 element at address: 0x20002806db80 with size: 0.000244 MiB 00:04:58.081 element at address: 0x20002806dc80 with size: 0.000244 MiB 00:04:58.081 element at address: 0x20002806dd80 with size: 0.000244 MiB 00:04:58.081 element at address: 0x20002806de80 with size: 0.000244 MiB 00:04:58.081 element at address: 0x20002806df80 with size: 0.000244 MiB 00:04:58.081 element at address: 0x20002806e080 with size: 0.000244 MiB 00:04:58.081 element at address: 0x20002806e180 with size: 0.000244 MiB 00:04:58.081 element at address: 0x20002806e280 with size: 0.000244 MiB 00:04:58.081 element at address: 0x20002806e380 with size: 0.000244 MiB 00:04:58.081 element at address: 0x20002806e480 with size: 0.000244 MiB 00:04:58.081 element at address: 0x20002806e580 with size: 0.000244 MiB 00:04:58.081 element at address: 0x20002806e680 with size: 0.000244 MiB 00:04:58.081 element at address: 0x20002806e780 with size: 0.000244 MiB 00:04:58.081 element at address: 0x20002806e880 with size: 0.000244 MiB 00:04:58.081 element at address: 0x20002806e980 with size: 0.000244 MiB 00:04:58.081 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:04:58.081 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:04:58.081 element at address: 0x20002806ec80 with size: 0.000244 MiB 00:04:58.081 element at address: 0x20002806ed80 with size: 0.000244 MiB 00:04:58.081 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:04:58.081 element at address: 0x20002806ef80 with size: 0.000244 MiB 00:04:58.081 element at address: 0x20002806f080 with size: 0.000244 MiB 00:04:58.081 element at address: 0x20002806f180 with size: 0.000244 MiB 00:04:58.081 element at address: 0x20002806f280 with size: 0.000244 MiB 00:04:58.081 element at address: 
0x20002806f380 with size: 0.000244 MiB 00:04:58.081 element at address: 0x20002806f480 with size: 0.000244 MiB 00:04:58.081 element at address: 0x20002806f580 with size: 0.000244 MiB 00:04:58.081 element at address: 0x20002806f680 with size: 0.000244 MiB 00:04:58.081 element at address: 0x20002806f780 with size: 0.000244 MiB 00:04:58.081 element at address: 0x20002806f880 with size: 0.000244 MiB 00:04:58.081 element at address: 0x20002806f980 with size: 0.000244 MiB 00:04:58.081 element at address: 0x20002806fa80 with size: 0.000244 MiB 00:04:58.081 element at address: 0x20002806fb80 with size: 0.000244 MiB 00:04:58.081 element at address: 0x20002806fc80 with size: 0.000244 MiB 00:04:58.081 element at address: 0x20002806fd80 with size: 0.000244 MiB 00:04:58.081 element at address: 0x20002806fe80 with size: 0.000244 MiB 00:04:58.081 list of memzone associated elements. size: 599.920898 MiB 00:04:58.081 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:04:58.081 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:58.081 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:04:58.081 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:58.081 element at address: 0x200012df4740 with size: 92.045105 MiB 00:04:58.081 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58009_0 00:04:58.081 element at address: 0x200000dff340 with size: 48.003113 MiB 00:04:58.081 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58009_0 00:04:58.081 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:04:58.081 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58009_0 00:04:58.081 element at address: 0x2000197be900 with size: 20.255615 MiB 00:04:58.081 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:58.081 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:04:58.081 associated memzone info: size: 18.004944 MiB name: 
MP_SCSI_TASK_Pool_0 00:04:58.081 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:04:58.081 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58009_0 00:04:58.081 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:04:58.081 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58009 00:04:58.081 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:04:58.081 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58009 00:04:58.081 element at address: 0x200018efde00 with size: 1.008179 MiB 00:04:58.081 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:58.081 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:04:58.081 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:58.081 element at address: 0x200018afde00 with size: 1.008179 MiB 00:04:58.081 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:58.081 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:04:58.081 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:58.081 element at address: 0x200000cff100 with size: 1.000549 MiB 00:04:58.081 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58009 00:04:58.081 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:04:58.081 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58009 00:04:58.081 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:04:58.081 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58009 00:04:58.081 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:04:58.081 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58009 00:04:58.081 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:04:58.081 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58009 00:04:58.081 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:04:58.081 associated memzone info: size: 0.500366 MiB name: 
RG_MP_bdev_io_58009 00:04:58.081 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:04:58.081 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:58.081 element at address: 0x200012c72280 with size: 0.500549 MiB 00:04:58.081 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:58.081 element at address: 0x20001967c440 with size: 0.250549 MiB 00:04:58.081 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:58.081 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:04:58.081 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58009 00:04:58.081 element at address: 0x20000085df80 with size: 0.125549 MiB 00:04:58.081 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58009 00:04:58.081 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:04:58.081 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:58.081 element at address: 0x200028064140 with size: 0.023804 MiB 00:04:58.081 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:58.081 element at address: 0x200000859d40 with size: 0.016174 MiB 00:04:58.081 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58009 00:04:58.081 element at address: 0x20002806a2c0 with size: 0.002502 MiB 00:04:58.081 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:58.081 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:04:58.081 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58009 00:04:58.081 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:04:58.081 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58009 00:04:58.081 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:04:58.081 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58009 00:04:58.081 element at address: 0x20002806ae00 with size: 0.000366 MiB 00:04:58.081 associated memzone info: size: 0.000183 MiB 
name: MP_Session_Pool 00:04:58.081 13:14:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:58.081 13:14:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58009 00:04:58.081 13:14:47 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58009 ']' 00:04:58.081 13:14:47 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58009 00:04:58.081 13:14:47 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:58.081 13:14:47 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:58.081 13:14:47 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58009 00:04:58.081 13:14:47 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:58.081 13:14:47 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:58.081 13:14:47 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58009' 00:04:58.081 killing process with pid 58009 00:04:58.082 13:14:47 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58009 00:04:58.082 13:14:47 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58009 00:05:00.620 00:05:00.620 real 0m3.966s 00:05:00.620 user 0m3.879s 00:05:00.620 sys 0m0.558s 00:05:00.620 13:14:49 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.620 13:14:49 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:00.620 ************************************ 00:05:00.620 END TEST dpdk_mem_utility 00:05:00.620 ************************************ 00:05:00.620 13:14:49 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:00.620 13:14:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.620 13:14:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.620 13:14:49 -- common/autotest_common.sh@10 -- # 
set +x 00:05:00.620 ************************************ 00:05:00.620 START TEST event 00:05:00.620 ************************************ 00:05:00.620 13:14:49 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:00.620 * Looking for test storage... 00:05:00.620 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:00.620 13:14:49 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:00.620 13:14:49 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:00.620 13:14:49 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:00.620 13:14:49 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:00.620 13:14:49 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:00.620 13:14:49 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:00.620 13:14:49 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:00.620 13:14:49 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:00.620 13:14:49 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:00.620 13:14:49 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:00.620 13:14:49 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:00.620 13:14:49 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:00.620 13:14:49 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:00.620 13:14:49 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:00.620 13:14:49 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:00.620 13:14:49 event -- scripts/common.sh@344 -- # case "$op" in 00:05:00.620 13:14:49 event -- scripts/common.sh@345 -- # : 1 00:05:00.620 13:14:49 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:00.620 13:14:49 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:00.620 13:14:49 event -- scripts/common.sh@365 -- # decimal 1 00:05:00.620 13:14:49 event -- scripts/common.sh@353 -- # local d=1 00:05:00.620 13:14:49 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:00.880 13:14:49 event -- scripts/common.sh@355 -- # echo 1 00:05:00.880 13:14:49 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:00.880 13:14:49 event -- scripts/common.sh@366 -- # decimal 2 00:05:00.880 13:14:49 event -- scripts/common.sh@353 -- # local d=2 00:05:00.880 13:14:49 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:00.880 13:14:49 event -- scripts/common.sh@355 -- # echo 2 00:05:00.880 13:14:49 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:00.880 13:14:49 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:00.880 13:14:49 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:00.880 13:14:49 event -- scripts/common.sh@368 -- # return 0 00:05:00.880 13:14:49 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:00.880 13:14:49 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:00.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.880 --rc genhtml_branch_coverage=1 00:05:00.880 --rc genhtml_function_coverage=1 00:05:00.880 --rc genhtml_legend=1 00:05:00.880 --rc geninfo_all_blocks=1 00:05:00.880 --rc geninfo_unexecuted_blocks=1 00:05:00.880 00:05:00.880 ' 00:05:00.880 13:14:49 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:00.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.880 --rc genhtml_branch_coverage=1 00:05:00.880 --rc genhtml_function_coverage=1 00:05:00.880 --rc genhtml_legend=1 00:05:00.880 --rc geninfo_all_blocks=1 00:05:00.880 --rc geninfo_unexecuted_blocks=1 00:05:00.880 00:05:00.880 ' 00:05:00.880 13:14:49 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:00.880 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:00.880 --rc genhtml_branch_coverage=1 00:05:00.880 --rc genhtml_function_coverage=1 00:05:00.880 --rc genhtml_legend=1 00:05:00.880 --rc geninfo_all_blocks=1 00:05:00.880 --rc geninfo_unexecuted_blocks=1 00:05:00.880 00:05:00.880 ' 00:05:00.880 13:14:49 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:00.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.880 --rc genhtml_branch_coverage=1 00:05:00.880 --rc genhtml_function_coverage=1 00:05:00.880 --rc genhtml_legend=1 00:05:00.880 --rc geninfo_all_blocks=1 00:05:00.880 --rc geninfo_unexecuted_blocks=1 00:05:00.880 00:05:00.880 ' 00:05:00.880 13:14:49 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:00.880 13:14:49 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:00.880 13:14:49 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:00.880 13:14:49 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:00.880 13:14:49 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.880 13:14:49 event -- common/autotest_common.sh@10 -- # set +x 00:05:00.880 ************************************ 00:05:00.880 START TEST event_perf 00:05:00.880 ************************************ 00:05:00.880 13:14:49 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:00.880 Running I/O for 1 seconds...[2024-11-17 13:14:49.913064] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
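The `lt 1.15 2` trace from scripts/common.sh above walks a component-wise version comparison: both versions are split on `IFS=.-:` into arrays, then compared numerically field by field. A minimal sketch of that logic follows; the name `version_lt` is chosen here for illustration (in the trace the entry point is `lt`, backed by `cmp_versions`), and missing components are padded with 0 as an assumption consistent with the `ver1_l`/`ver2_l` length handling in the log.

```shell
#!/usr/bin/env bash
# Sketch of the component-wise version compare traced above (lt 1.15 2).
# version_lt A B returns 0 (true) when A < B, mirroring the IFS=.-: split
# and the (( ver1[v] > ver2[v] )) / (( ver1[v] < ver2[v] )) checks in the log.
version_lt() {
    local IFS=.-:
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        # missing components compare as 0, e.g. "2" is treated like "2.0"
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        if (( a > b )); then return 1; fi
        if (( a < b )); then return 0; fi
    done
    return 1   # equal versions are not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"
```

In the trace this check gates which `LCOV_OPTS` are exported: because lcov 1.15 is older than 2, the branch/function coverage flags for the 1.x option spelling are selected.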
00:05:00.880 [2024-11-17 13:14:49.913233] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58117 ] 00:05:00.880 [2024-11-17 13:14:50.087553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:01.140 [2024-11-17 13:14:50.203858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:01.140 [2024-11-17 13:14:50.204168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.140 [2024-11-17 13:14:50.204033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:01.140 Running I/O for 1 seconds...[2024-11-17 13:14:50.204205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:02.560 00:05:02.560 lcore 0: 209194 00:05:02.560 lcore 1: 209194 00:05:02.560 lcore 2: 209192 00:05:02.560 lcore 3: 209193 00:05:02.560 done. 
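The event_perf run above was launched with `-m 0xF`, and the reactor notices show one reactor starting on each of cores 0-3. A minimal sketch of how such a hex core mask maps to lcore numbers (the `mask_to_cores` helper is illustrative, not part of SPDK):

```shell
# Illustrative helper: list the lcores selected by a hex core mask such as
# the 0xF passed to event_perf above (bit N set => lcore N is used).
mask_to_cores() {
    mask=$(( $1 ))    # shell arithmetic accepts the 0x prefix directly
    core=0
    cores=""
    while [ "$mask" -gt 0 ]; do
        if [ $(( mask & 1 )) -eq 1 ]; then
            cores="$cores $core"
        fi
        mask=$(( mask >> 1 ))
        core=$(( core + 1 ))
    done
    echo "${cores# }"
}

mask_to_cores 0xF   # -> 0 1 2 3
mask_to_cores 0x3   # -> 0 1
```

This matches the log: `-m 0xF` yields four reactors (cores 0-3), while the later app_repeat run with `-m 0x3` yields two (cores 0-1).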
00:05:02.560 00:05:02.560 real 0m1.578s 00:05:02.561 user 0m4.345s 00:05:02.561 sys 0m0.112s 00:05:02.561 13:14:51 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.561 13:14:51 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:02.561 ************************************ 00:05:02.561 END TEST event_perf 00:05:02.561 ************************************ 00:05:02.561 13:14:51 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:02.561 13:14:51 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:02.561 13:14:51 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.561 13:14:51 event -- common/autotest_common.sh@10 -- # set +x 00:05:02.561 ************************************ 00:05:02.561 START TEST event_reactor 00:05:02.561 ************************************ 00:05:02.561 13:14:51 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:02.561 [2024-11-17 13:14:51.569025] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:05:02.561 [2024-11-17 13:14:51.569140] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58156 ] 00:05:02.561 [2024-11-17 13:14:51.746387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.820 [2024-11-17 13:14:51.864387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.199 test_start 00:05:04.199 oneshot 00:05:04.199 tick 100 00:05:04.199 tick 100 00:05:04.199 tick 250 00:05:04.199 tick 100 00:05:04.199 tick 100 00:05:04.199 tick 100 00:05:04.199 tick 250 00:05:04.199 tick 500 00:05:04.199 tick 100 00:05:04.199 tick 100 00:05:04.199 tick 250 00:05:04.199 tick 100 00:05:04.199 tick 100 00:05:04.199 test_end 00:05:04.199 ************************************ 00:05:04.199 END TEST event_reactor 00:05:04.199 ************************************ 00:05:04.199 00:05:04.199 real 0m1.570s 00:05:04.199 user 0m1.362s 00:05:04.199 sys 0m0.098s 00:05:04.199 13:14:53 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:04.199 13:14:53 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:04.199 13:14:53 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:04.199 13:14:53 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:04.199 13:14:53 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.199 13:14:53 event -- common/autotest_common.sh@10 -- # set +x 00:05:04.199 ************************************ 00:05:04.199 START TEST event_reactor_perf 00:05:04.199 ************************************ 00:05:04.199 13:14:53 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:04.199 [2024-11-17 
13:14:53.198335] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:05:04.199 [2024-11-17 13:14:53.198433] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58193 ] 00:05:04.199 [2024-11-17 13:14:53.372799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.459 [2024-11-17 13:14:53.491136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.838 test_start 00:05:05.838 test_end 00:05:05.838 Performance: 388784 events per second 00:05:05.838 00:05:05.838 real 0m1.563s 00:05:05.838 user 0m1.353s 00:05:05.838 sys 0m0.100s 00:05:05.838 ************************************ 00:05:05.838 END TEST event_reactor_perf 00:05:05.838 ************************************ 00:05:05.838 13:14:54 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.838 13:14:54 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:05.838 13:14:54 event -- event/event.sh@49 -- # uname -s 00:05:05.838 13:14:54 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:05.838 13:14:54 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:05.838 13:14:54 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.838 13:14:54 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.838 13:14:54 event -- common/autotest_common.sh@10 -- # set +x 00:05:05.838 ************************************ 00:05:05.838 START TEST event_scheduler 00:05:05.838 ************************************ 00:05:05.838 13:14:54 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:05.838 * Looking for test storage... 
00:05:05.838 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:05.838 13:14:54 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:05.838 13:14:54 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:05.838 13:14:54 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:05.838 13:14:54 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:05.838 13:14:54 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:05.838 13:14:54 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:05.838 13:14:54 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:05.838 13:14:54 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.838 13:14:54 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:05.838 13:14:54 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:05.838 13:14:54 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:05.838 13:14:54 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:05.838 13:14:54 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:05.838 13:14:54 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:05.838 13:14:54 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:05.838 13:14:54 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:05.838 13:14:54 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:05.838 13:14:54 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:05.838 13:14:54 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:05.838 13:14:54 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:05.838 13:14:54 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:05.838 13:14:54 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.838 13:14:54 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:05.838 13:14:54 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:05.838 13:14:54 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:05.838 13:14:54 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:05.838 13:14:55 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.838 13:14:55 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:05.838 13:14:55 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:05.838 13:14:55 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:05.838 13:14:55 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:05.838 13:14:55 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:05.838 13:14:55 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.838 13:14:55 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:05.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.838 --rc genhtml_branch_coverage=1 00:05:05.838 --rc genhtml_function_coverage=1 00:05:05.838 --rc genhtml_legend=1 00:05:05.838 --rc geninfo_all_blocks=1 00:05:05.838 --rc geninfo_unexecuted_blocks=1 00:05:05.838 00:05:05.838 ' 00:05:05.838 13:14:55 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:05.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.839 --rc genhtml_branch_coverage=1 00:05:05.839 --rc genhtml_function_coverage=1 00:05:05.839 --rc 
genhtml_legend=1 00:05:05.839 --rc geninfo_all_blocks=1 00:05:05.839 --rc geninfo_unexecuted_blocks=1 00:05:05.839 00:05:05.839 ' 00:05:05.839 13:14:55 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:05.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.839 --rc genhtml_branch_coverage=1 00:05:05.839 --rc genhtml_function_coverage=1 00:05:05.839 --rc genhtml_legend=1 00:05:05.839 --rc geninfo_all_blocks=1 00:05:05.839 --rc geninfo_unexecuted_blocks=1 00:05:05.839 00:05:05.839 ' 00:05:05.839 13:14:55 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:05.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.839 --rc genhtml_branch_coverage=1 00:05:05.839 --rc genhtml_function_coverage=1 00:05:05.839 --rc genhtml_legend=1 00:05:05.839 --rc geninfo_all_blocks=1 00:05:05.839 --rc geninfo_unexecuted_blocks=1 00:05:05.839 00:05:05.839 ' 00:05:05.839 13:14:55 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:05.839 13:14:55 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58269 00:05:05.839 13:14:55 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:05.839 13:14:55 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:05.839 13:14:55 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58269 00:05:05.839 13:14:55 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58269 ']' 00:05:05.839 13:14:55 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.839 13:14:55 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:05.839 13:14:55 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:05:05.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.839 13:14:55 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:05.839 13:14:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:06.099 [2024-11-17 13:14:55.095642] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:05:06.099 [2024-11-17 13:14:55.095841] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58269 ] 00:05:06.099 [2024-11-17 13:14:55.268987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:06.358 [2024-11-17 13:14:55.383688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.359 [2024-11-17 13:14:55.383994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:06.359 [2024-11-17 13:14:55.384030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:06.359 [2024-11-17 13:14:55.383855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.928 13:14:55 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:06.928 13:14:55 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:06.928 13:14:55 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:06.928 13:14:55 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.928 13:14:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:06.928 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:06.928 POWER: Cannot set governor of lcore 0 to userspace 00:05:06.928 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:06.928 POWER: Cannot set governor of lcore 0 to performance 00:05:06.928 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:06.928 POWER: Cannot set governor of lcore 0 to userspace 00:05:06.928 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:06.928 POWER: Cannot set governor of lcore 0 to userspace 00:05:06.928 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:06.928 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:06.928 POWER: Unable to set Power Management Environment for lcore 0 00:05:06.928 [2024-11-17 13:14:55.952890] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:06.928 [2024-11-17 13:14:55.952932] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:06.928 [2024-11-17 13:14:55.952963] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:06.928 [2024-11-17 13:14:55.953006] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:06.928 [2024-11-17 13:14:55.953032] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:06.928 [2024-11-17 13:14:55.953058] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:06.928 13:14:55 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:06.928 13:14:55 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:06.928 13:14:55 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.928 13:14:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:07.188 [2024-11-17 13:14:56.275440] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
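The `POWER: Cannot set governor` notices above are expected on this VM: the dynamic scheduler's DPDK governor tries to write a target governor into each core's cpufreq sysfs file, and on hosts without cpufreq control that file is absent, so the scheduler falls back to its built-in defaults (load limit 20, core limit 80, core busy 95, as logged). A minimal sketch of that write-or-fall-back step (the `set_governor` helper is illustrative, not SPDK code):

```shell
# Illustrative only: mimic the governor write the DPDK power library attempts.
# Returns non-zero (as the POWER notices in the log indicate) when the sysfs
# file is missing or not writable, e.g. inside a VM without cpufreq support.
set_governor() {
    gov_file=$1
    gov=$2
    if [ ! -w "$gov_file" ]; then
        echo "POWER: cannot set governor via $gov_file" >&2
        return 1
    fi
    echo "$gov" > "$gov_file"
}

# The real path would be /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor;
# demonstrate against a temporary stand-in instead.
tmpdir=$(mktemp -d)
echo performance > "$tmpdir/scaling_governor"
set_governor "$tmpdir/scaling_governor" userspace && cat "$tmpdir/scaling_governor"
set_governor "$tmpdir/missing_governor" userspace || echo "fell back to default scheduling"
rm -rf "$tmpdir"
```

In the log the fallback is harmless: the scheduler test continues and reports "Scheduler test application started." despite the governor errors.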
00:05:07.188 13:14:56 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.188 13:14:56 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:07.188 13:14:56 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:07.188 13:14:56 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:07.188 13:14:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:07.188 ************************************ 00:05:07.188 START TEST scheduler_create_thread 00:05:07.188 ************************************ 00:05:07.188 13:14:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:07.188 13:14:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:07.188 13:14:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.188 13:14:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.188 2 00:05:07.188 13:14:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.188 13:14:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:07.188 13:14:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.188 13:14:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.188 3 00:05:07.188 13:14:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.188 13:14:56 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:07.188 13:14:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.188 13:14:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.188 4 00:05:07.188 13:14:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.188 13:14:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:07.188 13:14:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.188 13:14:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.188 5 00:05:07.188 13:14:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.188 13:14:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:07.188 13:14:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.188 13:14:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.188 6 00:05:07.188 13:14:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.188 13:14:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:07.188 13:14:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.188 13:14:56 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:05:07.188 7 00:05:07.188 13:14:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.188 13:14:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:07.188 13:14:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.188 13:14:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.188 8 00:05:07.188 13:14:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.188 13:14:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:07.188 13:14:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.188 13:14:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.188 9 00:05:07.188 13:14:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.188 13:14:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:07.188 13:14:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.188 13:14:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.188 10 00:05:07.188 13:14:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.188 13:14:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:05:07.188 13:14:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.188 13:14:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:08.569 13:14:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.569 13:14:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:08.569 13:14:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:08.569 13:14:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.569 13:14:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.509 13:14:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.509 13:14:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:09.509 13:14:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.509 13:14:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.447 13:14:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.447 13:14:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:10.447 13:14:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:10.447 13:14:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.447 13:14:59 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.017 13:15:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.017 00:05:11.017 real 0m3.885s 00:05:11.017 user 0m0.028s 00:05:11.017 sys 0m0.009s 00:05:11.017 ************************************ 00:05:11.017 END TEST scheduler_create_thread 00:05:11.017 ************************************ 00:05:11.017 13:15:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.017 13:15:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.017 13:15:00 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:11.017 13:15:00 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58269 00:05:11.017 13:15:00 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58269 ']' 00:05:11.017 13:15:00 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58269 00:05:11.017 13:15:00 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:11.017 13:15:00 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:11.017 13:15:00 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58269 00:05:11.276 killing process with pid 58269 00:05:11.276 13:15:00 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:11.276 13:15:00 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:11.276 13:15:00 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58269' 00:05:11.276 13:15:00 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58269 00:05:11.276 13:15:00 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58269 00:05:11.535 [2024-11-17 13:15:00.552742] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:12.917 ************************************ 00:05:12.917 END TEST event_scheduler 00:05:12.917 ************************************ 00:05:12.917 00:05:12.917 real 0m6.934s 00:05:12.917 user 0m14.427s 00:05:12.917 sys 0m0.492s 00:05:12.917 13:15:01 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.917 13:15:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:12.917 13:15:01 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:12.917 13:15:01 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:12.917 13:15:01 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.917 13:15:01 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.917 13:15:01 event -- common/autotest_common.sh@10 -- # set +x 00:05:12.917 ************************************ 00:05:12.917 START TEST app_repeat 00:05:12.917 ************************************ 00:05:12.917 13:15:01 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:12.917 13:15:01 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.917 13:15:01 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.917 13:15:01 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:12.917 13:15:01 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:12.917 13:15:01 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:12.917 13:15:01 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:12.917 13:15:01 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:12.917 13:15:01 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58386 00:05:12.917 13:15:01 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:12.917 
13:15:01 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:12.917 13:15:01 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58386' 00:05:12.917 Process app_repeat pid: 58386 00:05:12.918 13:15:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:12.918 13:15:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:12.918 spdk_app_start Round 0 00:05:12.918 13:15:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58386 /var/tmp/spdk-nbd.sock 00:05:12.918 13:15:01 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58386 ']' 00:05:12.918 13:15:01 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:12.918 13:15:01 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:12.918 13:15:01 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:12.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:12.918 13:15:01 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:12.918 13:15:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:12.918 [2024-11-17 13:15:01.862568] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:05:12.918 [2024-11-17 13:15:01.863312] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58386 ] 00:05:12.918 [2024-11-17 13:15:02.052503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:13.178 [2024-11-17 13:15:02.172341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.178 [2024-11-17 13:15:02.172377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.747 13:15:02 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:13.747 13:15:02 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:13.747 13:15:02 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:13.747 Malloc0 00:05:14.007 13:15:02 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:14.266 Malloc1 00:05:14.266 13:15:03 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:14.266 13:15:03 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.266 13:15:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:14.266 13:15:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:14.266 13:15:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.266 13:15:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:14.266 13:15:03 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:14.266 13:15:03 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.266 13:15:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:14.266 13:15:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:14.266 13:15:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.266 13:15:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:14.266 13:15:03 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:14.266 13:15:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:14.266 13:15:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:14.266 13:15:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:14.266 /dev/nbd0 00:05:14.266 13:15:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:14.533 13:15:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:14.533 13:15:03 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:14.533 13:15:03 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:14.533 13:15:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:14.533 13:15:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:14.533 13:15:03 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:14.533 13:15:03 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:14.533 13:15:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:14.533 13:15:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:14.533 13:15:03 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:14.533 1+0 records in 00:05:14.533 1+0 
records out 00:05:14.533 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000553619 s, 7.4 MB/s 00:05:14.533 13:15:03 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:14.533 13:15:03 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:14.533 13:15:03 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:14.533 13:15:03 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:14.533 13:15:03 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:14.533 13:15:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:14.533 13:15:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:14.533 13:15:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:14.533 /dev/nbd1 00:05:14.813 13:15:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:14.813 13:15:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:14.813 13:15:03 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:14.813 13:15:03 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:14.813 13:15:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:14.813 13:15:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:14.813 13:15:03 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:14.813 13:15:03 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:14.813 13:15:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:14.813 13:15:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:14.813 13:15:03 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:14.813 1+0 records in 00:05:14.813 1+0 records out 00:05:14.813 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000234984 s, 17.4 MB/s 00:05:14.813 13:15:03 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:14.813 13:15:03 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:14.813 13:15:03 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:14.813 13:15:03 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:14.813 13:15:03 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:14.813 13:15:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:14.813 13:15:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:14.813 13:15:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:14.813 13:15:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.813 13:15:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:14.813 13:15:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:14.813 { 00:05:14.813 "nbd_device": "/dev/nbd0", 00:05:14.813 "bdev_name": "Malloc0" 00:05:14.813 }, 00:05:14.813 { 00:05:14.813 "nbd_device": "/dev/nbd1", 00:05:14.813 "bdev_name": "Malloc1" 00:05:14.813 } 00:05:14.813 ]' 00:05:14.813 13:15:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:14.813 { 00:05:14.813 "nbd_device": "/dev/nbd0", 00:05:14.813 "bdev_name": "Malloc0" 00:05:14.813 }, 00:05:14.813 { 00:05:14.813 "nbd_device": "/dev/nbd1", 00:05:14.813 "bdev_name": "Malloc1" 00:05:14.813 } 00:05:14.813 ]' 00:05:14.813 13:15:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:05:15.073 13:15:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:15.073 /dev/nbd1' 00:05:15.073 13:15:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:15.073 /dev/nbd1' 00:05:15.073 13:15:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:15.073 13:15:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:15.073 13:15:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:15.073 13:15:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:15.073 13:15:04 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:15.073 13:15:04 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:15.073 13:15:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.073 13:15:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:15.073 13:15:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:15.073 13:15:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:15.073 13:15:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:15.073 13:15:04 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:15.073 256+0 records in 00:05:15.073 256+0 records out 00:05:15.073 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0140387 s, 74.7 MB/s 00:05:15.073 13:15:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:15.073 13:15:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:15.073 256+0 records in 00:05:15.073 256+0 records out 00:05:15.073 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0230016 s, 45.6 MB/s 00:05:15.073 13:15:04 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:15.073 13:15:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:15.073 256+0 records in 00:05:15.073 256+0 records out 00:05:15.073 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.028941 s, 36.2 MB/s 00:05:15.073 13:15:04 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:15.073 13:15:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.073 13:15:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:15.073 13:15:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:15.073 13:15:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:15.073 13:15:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:15.073 13:15:04 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:15.073 13:15:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:15.073 13:15:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:15.073 13:15:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:15.073 13:15:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:15.073 13:15:04 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:15.073 13:15:04 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:15.073 13:15:04 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.073 13:15:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.073 13:15:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:15.073 13:15:04 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:15.073 13:15:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:15.073 13:15:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:15.333 13:15:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:15.333 13:15:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:15.333 13:15:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:15.333 13:15:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:15.333 13:15:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:15.333 13:15:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:15.333 13:15:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:15.333 13:15:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:15.333 13:15:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:15.334 13:15:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:15.592 13:15:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:15.592 13:15:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:15.592 13:15:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:15.592 13:15:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:15.592 13:15:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:15.592 13:15:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:15.592 13:15:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:05:15.592 13:15:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:15.592 13:15:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:15.592 13:15:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.592 13:15:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:15.852 13:15:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:15.852 13:15:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:15.852 13:15:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:15.852 13:15:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:15.852 13:15:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:15.852 13:15:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:15.852 13:15:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:15.852 13:15:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:15.852 13:15:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:15.852 13:15:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:15.852 13:15:04 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:15.852 13:15:04 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:15.852 13:15:04 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:16.422 13:15:05 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:17.361 [2024-11-17 13:15:06.478295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:17.621 [2024-11-17 13:15:06.591223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.621 [2024-11-17 13:15:06.591246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:17.621 
[2024-11-17 13:15:06.783643] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:17.621 [2024-11-17 13:15:06.783708] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:19.528 spdk_app_start Round 1 00:05:19.528 13:15:08 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:19.528 13:15:08 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:19.528 13:15:08 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58386 /var/tmp/spdk-nbd.sock 00:05:19.528 13:15:08 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58386 ']' 00:05:19.528 13:15:08 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:19.528 13:15:08 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:19.528 13:15:08 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:19.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:19.528 13:15:08 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:19.528 13:15:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:19.528 13:15:08 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:19.528 13:15:08 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:19.528 13:15:08 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:19.789 Malloc0 00:05:19.789 13:15:08 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:20.048 Malloc1 00:05:20.048 13:15:09 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:20.048 13:15:09 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.048 13:15:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:20.048 13:15:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:20.048 13:15:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.048 13:15:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:20.048 13:15:09 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:20.048 13:15:09 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.048 13:15:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:20.048 13:15:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:20.048 13:15:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.048 13:15:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:20.048 13:15:09 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:20.048 13:15:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:20.048 13:15:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:20.048 13:15:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:20.309 /dev/nbd0 00:05:20.309 13:15:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:20.309 13:15:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:20.309 13:15:09 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:20.309 13:15:09 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:20.309 13:15:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:20.309 13:15:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:20.309 13:15:09 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:20.309 13:15:09 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:20.309 13:15:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:20.309 13:15:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:20.309 13:15:09 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:20.309 1+0 records in 00:05:20.309 1+0 records out 00:05:20.309 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000338878 s, 12.1 MB/s 00:05:20.309 13:15:09 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:20.309 13:15:09 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:20.309 13:15:09 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:20.309 
13:15:09 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:20.309 13:15:09 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:20.309 13:15:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:20.309 13:15:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:20.309 13:15:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:20.569 /dev/nbd1 00:05:20.569 13:15:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:20.569 13:15:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:20.569 13:15:09 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:20.569 13:15:09 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:20.569 13:15:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:20.569 13:15:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:20.569 13:15:09 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:20.569 13:15:09 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:20.569 13:15:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:20.569 13:15:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:20.569 13:15:09 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:20.569 1+0 records in 00:05:20.569 1+0 records out 00:05:20.569 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000357589 s, 11.5 MB/s 00:05:20.569 13:15:09 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:20.569 13:15:09 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:20.569 13:15:09 event.app_repeat 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:20.569 13:15:09 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:20.569 13:15:09 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:20.569 13:15:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:20.569 13:15:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:20.569 13:15:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:20.569 13:15:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.569 13:15:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:20.830 13:15:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:20.830 { 00:05:20.830 "nbd_device": "/dev/nbd0", 00:05:20.830 "bdev_name": "Malloc0" 00:05:20.830 }, 00:05:20.830 { 00:05:20.830 "nbd_device": "/dev/nbd1", 00:05:20.830 "bdev_name": "Malloc1" 00:05:20.830 } 00:05:20.830 ]' 00:05:20.830 13:15:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:20.830 { 00:05:20.830 "nbd_device": "/dev/nbd0", 00:05:20.830 "bdev_name": "Malloc0" 00:05:20.830 }, 00:05:20.830 { 00:05:20.830 "nbd_device": "/dev/nbd1", 00:05:20.830 "bdev_name": "Malloc1" 00:05:20.830 } 00:05:20.830 ]' 00:05:20.830 13:15:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:20.830 13:15:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:20.830 /dev/nbd1' 00:05:20.830 13:15:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:20.830 13:15:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:20.830 /dev/nbd1' 00:05:20.830 13:15:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:20.830 13:15:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:20.830 
13:15:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:20.830 13:15:09 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:20.830 13:15:09 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:20.830 13:15:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.830 13:15:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:20.830 13:15:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:20.830 13:15:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:20.830 13:15:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:20.830 13:15:09 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:20.830 256+0 records in 00:05:20.830 256+0 records out 00:05:20.830 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0137093 s, 76.5 MB/s 00:05:20.830 13:15:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:20.830 13:15:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:20.830 256+0 records in 00:05:20.830 256+0 records out 00:05:20.830 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0251585 s, 41.7 MB/s 00:05:20.830 13:15:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:20.830 13:15:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:20.830 256+0 records in 00:05:20.830 256+0 records out 00:05:20.830 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0249095 s, 42.1 MB/s 00:05:20.830 13:15:09 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:05:20.830 13:15:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.830 13:15:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:20.830 13:15:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:20.830 13:15:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:20.830 13:15:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:20.830 13:15:09 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:20.830 13:15:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:20.830 13:15:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:20.830 13:15:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:20.830 13:15:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:20.830 13:15:09 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:20.830 13:15:09 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:20.830 13:15:09 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.830 13:15:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.830 13:15:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:20.830 13:15:09 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:20.830 13:15:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:20.830 13:15:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:21.090 13:15:10 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:21.090 13:15:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:21.090 13:15:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:21.090 13:15:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:21.090 13:15:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:21.090 13:15:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:21.090 13:15:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:21.090 13:15:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:21.090 13:15:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:21.090 13:15:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:21.350 13:15:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:21.350 13:15:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:21.350 13:15:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:21.350 13:15:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:21.350 13:15:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:21.350 13:15:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:21.350 13:15:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:21.350 13:15:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:21.350 13:15:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:21.350 13:15:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.350 13:15:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:21.610 13:15:10 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:21.610 13:15:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:21.610 13:15:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:21.610 13:15:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:21.611 13:15:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:21.611 13:15:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:21.611 13:15:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:21.611 13:15:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:21.611 13:15:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:21.611 13:15:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:21.611 13:15:10 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:21.611 13:15:10 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:21.611 13:15:10 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:21.871 13:15:11 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:23.253 [2024-11-17 13:15:12.210488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:23.253 [2024-11-17 13:15:12.325793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.253 [2024-11-17 13:15:12.325818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.513 [2024-11-17 13:15:12.523352] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:23.513 [2024-11-17 13:15:12.523421] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:05:24.894 13:15:14 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:24.894 spdk_app_start Round 2 00:05:24.894 13:15:14 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:24.894 13:15:14 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58386 /var/tmp/spdk-nbd.sock 00:05:24.894 13:15:14 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58386 ']' 00:05:24.894 13:15:14 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:24.894 13:15:14 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:24.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:24.894 13:15:14 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:24.894 13:15:14 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:24.894 13:15:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:25.154 13:15:14 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:25.154 13:15:14 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:25.154 13:15:14 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:25.414 Malloc0 00:05:25.414 13:15:14 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:25.674 Malloc1 00:05:25.674 13:15:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:25.674 13:15:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.674 13:15:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:25.674 
13:15:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:25.674 13:15:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.674 13:15:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:25.674 13:15:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:25.674 13:15:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.674 13:15:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:25.674 13:15:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:25.674 13:15:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.674 13:15:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:25.674 13:15:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:25.674 13:15:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:25.674 13:15:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:25.674 13:15:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:25.937 /dev/nbd0 00:05:25.937 13:15:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:25.937 13:15:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:25.937 13:15:15 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:25.937 13:15:15 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:25.937 13:15:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:25.937 13:15:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:25.937 13:15:15 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:25.937 13:15:15 
event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:25.937 13:15:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:25.937 13:15:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:25.937 13:15:15 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:25.937 1+0 records in 00:05:25.937 1+0 records out 00:05:25.937 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000464841 s, 8.8 MB/s 00:05:25.937 13:15:15 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:25.937 13:15:15 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:25.937 13:15:15 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:25.937 13:15:15 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:25.937 13:15:15 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:25.937 13:15:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:25.937 13:15:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:25.937 13:15:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:26.201 /dev/nbd1 00:05:26.201 13:15:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:26.201 13:15:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:26.201 13:15:15 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:26.201 13:15:15 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:26.201 13:15:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:26.201 13:15:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:26.201 13:15:15 
event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:26.201 13:15:15 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:26.201 13:15:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:26.201 13:15:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:26.201 13:15:15 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:26.201 1+0 records in 00:05:26.201 1+0 records out 00:05:26.201 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000352773 s, 11.6 MB/s 00:05:26.201 13:15:15 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:26.201 13:15:15 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:26.201 13:15:15 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:26.201 13:15:15 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:26.201 13:15:15 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:26.201 13:15:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:26.201 13:15:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:26.201 13:15:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:26.201 13:15:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.201 13:15:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:26.461 13:15:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:26.461 { 00:05:26.461 "nbd_device": "/dev/nbd0", 00:05:26.461 "bdev_name": "Malloc0" 00:05:26.461 }, 00:05:26.461 { 00:05:26.461 "nbd_device": "/dev/nbd1", 00:05:26.461 "bdev_name": 
"Malloc1" 00:05:26.461 } 00:05:26.461 ]' 00:05:26.461 13:15:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:26.461 { 00:05:26.461 "nbd_device": "/dev/nbd0", 00:05:26.461 "bdev_name": "Malloc0" 00:05:26.461 }, 00:05:26.461 { 00:05:26.461 "nbd_device": "/dev/nbd1", 00:05:26.461 "bdev_name": "Malloc1" 00:05:26.461 } 00:05:26.461 ]' 00:05:26.461 13:15:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:26.461 13:15:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:26.461 /dev/nbd1' 00:05:26.461 13:15:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:26.461 /dev/nbd1' 00:05:26.461 13:15:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:26.461 13:15:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:26.461 13:15:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:26.461 13:15:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:26.461 13:15:15 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:26.461 13:15:15 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:26.461 13:15:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.461 13:15:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:26.461 13:15:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:26.461 13:15:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:26.461 13:15:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:26.461 13:15:15 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:26.461 256+0 records in 00:05:26.461 256+0 records out 00:05:26.461 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0139368 s, 75.2 MB/s 
00:05:26.461 13:15:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:26.461 13:15:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:26.461 256+0 records in 00:05:26.461 256+0 records out 00:05:26.461 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0223222 s, 47.0 MB/s 00:05:26.461 13:15:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:26.461 13:15:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:26.461 256+0 records in 00:05:26.461 256+0 records out 00:05:26.461 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0252316 s, 41.6 MB/s 00:05:26.461 13:15:15 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:26.461 13:15:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.461 13:15:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:26.461 13:15:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:26.461 13:15:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:26.461 13:15:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:26.461 13:15:15 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:26.461 13:15:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:26.461 13:15:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:26.461 13:15:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:26.461 13:15:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:05:26.461 13:15:15 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:26.720 13:15:15 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:26.720 13:15:15 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.720 13:15:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.720 13:15:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:26.720 13:15:15 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:26.720 13:15:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:26.720 13:15:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:26.720 13:15:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:26.720 13:15:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:26.720 13:15:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:26.720 13:15:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:26.720 13:15:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:26.720 13:15:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:26.720 13:15:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:26.720 13:15:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:26.720 13:15:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:26.720 13:15:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:26.979 13:15:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:26.979 13:15:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:05:26.979 13:15:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:26.979 13:15:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:26.979 13:15:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:26.979 13:15:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:26.979 13:15:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:26.979 13:15:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:26.979 13:15:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:26.979 13:15:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.979 13:15:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:27.238 13:15:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:27.238 13:15:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:27.238 13:15:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:27.238 13:15:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:27.238 13:15:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:27.238 13:15:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:27.238 13:15:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:27.238 13:15:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:27.238 13:15:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:27.238 13:15:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:27.238 13:15:16 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:27.238 13:15:16 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:27.238 13:15:16 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:27.808 13:15:16 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:28.747 [2024-11-17 13:15:17.946632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:29.007 [2024-11-17 13:15:18.062753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.007 [2024-11-17 13:15:18.062755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.267 [2024-11-17 13:15:18.256329] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:29.267 [2024-11-17 13:15:18.256415] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:30.653 13:15:19 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58386 /var/tmp/spdk-nbd.sock 00:05:30.653 13:15:19 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58386 ']' 00:05:30.653 13:15:19 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:30.653 13:15:19 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:30.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:30.653 13:15:19 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:30.653 13:15:19 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:30.653 13:15:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:30.912 13:15:20 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:30.912 13:15:20 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:30.912 13:15:20 event.app_repeat -- event/event.sh@39 -- # killprocess 58386 00:05:30.912 13:15:20 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58386 ']' 00:05:30.912 13:15:20 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58386 00:05:30.912 13:15:20 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:30.912 13:15:20 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:30.912 13:15:20 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58386 00:05:30.912 13:15:20 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:30.912 13:15:20 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:30.912 killing process with pid 58386 00:05:30.912 13:15:20 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58386' 00:05:30.912 13:15:20 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58386 00:05:30.912 13:15:20 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58386 00:05:31.892 spdk_app_start is called in Round 0. 00:05:31.892 Shutdown signal received, stop current app iteration 00:05:31.892 Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 reinitialization... 00:05:31.892 spdk_app_start is called in Round 1. 00:05:31.892 Shutdown signal received, stop current app iteration 00:05:31.892 Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 reinitialization... 00:05:31.892 spdk_app_start is called in Round 2. 
00:05:31.892 Shutdown signal received, stop current app iteration 00:05:31.892 Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 reinitialization... 00:05:31.892 spdk_app_start is called in Round 3. 00:05:31.892 Shutdown signal received, stop current app iteration 00:05:31.892 13:15:21 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:31.892 13:15:21 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:31.892 00:05:31.892 real 0m19.309s 00:05:31.892 user 0m41.295s 00:05:31.892 sys 0m2.764s 00:05:31.892 13:15:21 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.892 13:15:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:31.892 ************************************ 00:05:31.892 END TEST app_repeat 00:05:31.892 ************************************ 00:05:32.151 13:15:21 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:32.151 13:15:21 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:32.151 13:15:21 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:32.151 13:15:21 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.151 13:15:21 event -- common/autotest_common.sh@10 -- # set +x 00:05:32.151 ************************************ 00:05:32.151 START TEST cpu_locks 00:05:32.151 ************************************ 00:05:32.151 13:15:21 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:32.151 * Looking for test storage... 
00:05:32.151 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:32.151 13:15:21 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:32.151 13:15:21 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:32.151 13:15:21 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:32.151 13:15:21 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:32.151 13:15:21 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:32.151 13:15:21 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:32.151 13:15:21 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:32.151 13:15:21 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:32.151 13:15:21 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:32.151 13:15:21 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:32.151 13:15:21 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:32.151 13:15:21 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:32.151 13:15:21 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:32.151 13:15:21 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:32.151 13:15:21 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:32.151 13:15:21 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:32.151 13:15:21 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:32.151 13:15:21 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:32.151 13:15:21 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:32.151 13:15:21 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:32.410 13:15:21 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:32.410 13:15:21 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:32.410 13:15:21 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:32.410 13:15:21 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:32.410 13:15:21 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:32.410 13:15:21 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:32.410 13:15:21 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:32.410 13:15:21 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:32.410 13:15:21 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:32.410 13:15:21 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:32.410 13:15:21 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:32.410 13:15:21 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:32.410 13:15:21 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:32.410 13:15:21 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:32.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.410 --rc genhtml_branch_coverage=1 00:05:32.410 --rc genhtml_function_coverage=1 00:05:32.410 --rc genhtml_legend=1 00:05:32.410 --rc geninfo_all_blocks=1 00:05:32.410 --rc geninfo_unexecuted_blocks=1 00:05:32.410 00:05:32.410 ' 00:05:32.410 13:15:21 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:32.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.410 --rc genhtml_branch_coverage=1 00:05:32.410 --rc genhtml_function_coverage=1 00:05:32.410 --rc genhtml_legend=1 00:05:32.410 --rc geninfo_all_blocks=1 00:05:32.410 --rc geninfo_unexecuted_blocks=1 
00:05:32.410 00:05:32.410 ' 00:05:32.410 13:15:21 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:32.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.410 --rc genhtml_branch_coverage=1 00:05:32.410 --rc genhtml_function_coverage=1 00:05:32.410 --rc genhtml_legend=1 00:05:32.410 --rc geninfo_all_blocks=1 00:05:32.410 --rc geninfo_unexecuted_blocks=1 00:05:32.410 00:05:32.410 ' 00:05:32.410 13:15:21 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:32.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.410 --rc genhtml_branch_coverage=1 00:05:32.410 --rc genhtml_function_coverage=1 00:05:32.410 --rc genhtml_legend=1 00:05:32.410 --rc geninfo_all_blocks=1 00:05:32.410 --rc geninfo_unexecuted_blocks=1 00:05:32.410 00:05:32.410 ' 00:05:32.410 13:15:21 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:32.410 13:15:21 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:32.410 13:15:21 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:32.410 13:15:21 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:32.410 13:15:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:32.410 13:15:21 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.410 13:15:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:32.410 ************************************ 00:05:32.410 START TEST default_locks 00:05:32.410 ************************************ 00:05:32.410 13:15:21 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:32.410 13:15:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58833 00:05:32.410 13:15:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58833 00:05:32.410 13:15:21 event.cpu_locks.default_locks -- 
event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:32.410 13:15:21 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58833 ']' 00:05:32.410 13:15:21 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.410 13:15:21 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:32.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.410 13:15:21 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.410 13:15:21 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:32.410 13:15:21 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:32.410 [2024-11-17 13:15:21.501872] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:05:32.410 [2024-11-17 13:15:21.502430] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58833 ] 00:05:32.670 [2024-11-17 13:15:21.676157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.670 [2024-11-17 13:15:21.793431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.610 13:15:22 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:33.610 13:15:22 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:33.610 13:15:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58833 00:05:33.610 13:15:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58833 00:05:33.610 13:15:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:33.869 13:15:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58833 00:05:33.869 13:15:22 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58833 ']' 00:05:33.869 13:15:22 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58833 00:05:33.869 13:15:22 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:33.869 13:15:22 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:33.869 13:15:22 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58833 00:05:33.869 13:15:22 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:33.869 13:15:22 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:33.869 killing process with pid 58833 00:05:33.869 13:15:22 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58833' 00:05:33.869 13:15:22 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58833 00:05:33.869 13:15:22 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58833 00:05:36.409 13:15:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58833 00:05:36.409 13:15:25 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:36.409 13:15:25 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58833 00:05:36.409 13:15:25 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:36.409 13:15:25 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:36.409 13:15:25 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:36.409 13:15:25 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:36.409 13:15:25 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58833 00:05:36.409 13:15:25 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58833 ']' 00:05:36.409 13:15:25 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.409 13:15:25 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:36.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.409 13:15:25 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:36.409 13:15:25 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:36.409 13:15:25 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:36.409 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58833) - No such process 00:05:36.409 ERROR: process (pid: 58833) is no longer running 00:05:36.409 13:15:25 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:36.409 13:15:25 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:36.409 13:15:25 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:36.409 13:15:25 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:36.409 13:15:25 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:36.409 13:15:25 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:36.409 13:15:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:36.409 13:15:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:36.409 13:15:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:36.409 13:15:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:36.409 00:05:36.409 real 0m3.939s 00:05:36.409 user 0m3.849s 00:05:36.409 sys 0m0.604s 00:05:36.409 13:15:25 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.409 13:15:25 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:36.409 ************************************ 00:05:36.409 END TEST default_locks 00:05:36.409 ************************************ 00:05:36.409 13:15:25 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:36.409 13:15:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # 
'[' 2 -le 1 ']' 00:05:36.409 13:15:25 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.409 13:15:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:36.409 ************************************ 00:05:36.409 START TEST default_locks_via_rpc 00:05:36.409 ************************************ 00:05:36.409 13:15:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:36.409 13:15:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58908 00:05:36.409 13:15:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:36.409 13:15:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58908 00:05:36.409 13:15:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58908 ']' 00:05:36.409 13:15:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.409 13:15:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:36.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.409 13:15:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.409 13:15:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:36.409 13:15:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.409 [2024-11-17 13:15:25.506072] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:05:36.409 [2024-11-17 13:15:25.506201] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58908 ] 00:05:36.669 [2024-11-17 13:15:25.678140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.669 [2024-11-17 13:15:25.787502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.608 13:15:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:37.608 13:15:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:37.608 13:15:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:37.608 13:15:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.608 13:15:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.608 13:15:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.608 13:15:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:37.608 13:15:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:37.608 13:15:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:37.608 13:15:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:37.608 13:15:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:37.608 13:15:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.608 13:15:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.608 13:15:26 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.608 13:15:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58908 00:05:37.608 13:15:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58908 00:05:37.608 13:15:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:38.176 13:15:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58908 00:05:38.176 13:15:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58908 ']' 00:05:38.177 13:15:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58908 00:05:38.177 13:15:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:38.177 13:15:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:38.177 13:15:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58908 00:05:38.177 13:15:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:38.177 13:15:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:38.177 killing process with pid 58908 00:05:38.177 13:15:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58908' 00:05:38.177 13:15:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58908 00:05:38.177 13:15:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58908 00:05:40.717 00:05:40.717 real 0m4.156s 00:05:40.717 user 0m4.105s 00:05:40.717 sys 0m0.682s 00:05:40.718 13:15:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.718 13:15:29 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.718 ************************************ 00:05:40.718 END TEST default_locks_via_rpc 00:05:40.718 ************************************ 00:05:40.718 13:15:29 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:40.718 13:15:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.718 13:15:29 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.718 13:15:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:40.718 ************************************ 00:05:40.718 START TEST non_locking_app_on_locked_coremask 00:05:40.718 ************************************ 00:05:40.718 13:15:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:40.718 13:15:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58982 00:05:40.718 13:15:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:40.718 13:15:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58982 /var/tmp/spdk.sock 00:05:40.718 13:15:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58982 ']' 00:05:40.718 13:15:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.718 13:15:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:40.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:40.718 13:15:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.718 13:15:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:40.718 13:15:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:40.718 [2024-11-17 13:15:29.729701] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:05:40.718 [2024-11-17 13:15:29.730317] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58982 ] 00:05:40.718 [2024-11-17 13:15:29.888767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.977 [2024-11-17 13:15:30.003048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.914 13:15:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:41.914 13:15:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:41.914 13:15:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59004 00:05:41.914 13:15:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:41.914 13:15:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59004 /var/tmp/spdk2.sock 00:05:41.914 13:15:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59004 ']' 00:05:41.915 13:15:30 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:41.915 13:15:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:41.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:41.915 13:15:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:41.915 13:15:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:41.915 13:15:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:41.915 [2024-11-17 13:15:30.982204] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:05:41.915 [2024-11-17 13:15:30.982343] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59004 ] 00:05:42.174 [2024-11-17 13:15:31.152601] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:42.174 [2024-11-17 13:15:31.152652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.174 [2024-11-17 13:15:31.381257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.731 13:15:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:44.731 13:15:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:44.731 13:15:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58982 00:05:44.731 13:15:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58982 00:05:44.731 13:15:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:44.990 13:15:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58982 00:05:44.990 13:15:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58982 ']' 00:05:44.991 13:15:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58982 00:05:44.991 13:15:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:44.991 13:15:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:44.991 13:15:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58982 00:05:44.991 13:15:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:44.991 13:15:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:44.991 killing process with pid 58982 00:05:44.991 13:15:34 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58982' 00:05:44.991 13:15:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58982 00:05:44.991 13:15:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58982 00:05:50.269 13:15:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59004 00:05:50.269 13:15:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59004 ']' 00:05:50.269 13:15:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59004 00:05:50.269 13:15:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:50.269 13:15:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:50.269 13:15:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59004 00:05:50.269 13:15:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:50.269 13:15:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:50.269 killing process with pid 59004 00:05:50.269 13:15:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59004' 00:05:50.269 13:15:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59004 00:05:50.269 13:15:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59004 00:05:52.178 00:05:52.178 real 0m11.546s 00:05:52.178 user 0m11.802s 00:05:52.178 sys 0m1.212s 00:05:52.178 13:15:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:05:52.178 13:15:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.178 ************************************ 00:05:52.178 END TEST non_locking_app_on_locked_coremask 00:05:52.178 ************************************ 00:05:52.178 13:15:41 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:52.178 13:15:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:52.178 13:15:41 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.178 13:15:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:52.178 ************************************ 00:05:52.178 START TEST locking_app_on_unlocked_coremask 00:05:52.178 ************************************ 00:05:52.178 13:15:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:52.178 13:15:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59154 00:05:52.178 13:15:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59154 /var/tmp/spdk.sock 00:05:52.178 13:15:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:52.178 13:15:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59154 ']' 00:05:52.178 13:15:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.178 13:15:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:52.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:52.178 13:15:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.178 13:15:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:52.178 13:15:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.178 [2024-11-17 13:15:41.333230] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:05:52.178 [2024-11-17 13:15:41.333369] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59154 ] 00:05:52.438 [2024-11-17 13:15:41.491408] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:52.438 [2024-11-17 13:15:41.491458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.438 [2024-11-17 13:15:41.600316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.377 13:15:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.377 13:15:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:53.377 13:15:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:53.377 13:15:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59170 00:05:53.377 13:15:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59170 /var/tmp/spdk2.sock 00:05:53.377 13:15:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59170 ']' 
00:05:53.377 13:15:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:53.377 13:15:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:53.377 13:15:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:53.377 13:15:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.377 13:15:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:53.377 [2024-11-17 13:15:42.532240] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:05:53.377 [2024-11-17 13:15:42.532353] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59170 ] 00:05:53.637 [2024-11-17 13:15:42.699048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.897 [2024-11-17 13:15:42.933718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.473 13:15:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:56.473 13:15:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:56.473 13:15:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59170 00:05:56.473 13:15:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59170 00:05:56.473 13:15:45 
event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:57.041 13:15:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59154 00:05:57.041 13:15:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59154 ']' 00:05:57.041 13:15:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59154 00:05:57.041 13:15:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:57.041 13:15:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:57.041 13:15:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59154 00:05:57.041 13:15:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:57.041 13:15:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:57.041 killing process with pid 59154 00:05:57.041 13:15:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59154' 00:05:57.041 13:15:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59154 00:05:57.042 13:15:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59154 00:06:02.318 13:15:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59170 00:06:02.318 13:15:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59170 ']' 00:06:02.318 13:15:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59170 00:06:02.318 13:15:50 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@959 -- # uname 00:06:02.318 13:15:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:02.318 13:15:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59170 00:06:02.318 13:15:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:02.318 13:15:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:02.318 killing process with pid 59170 00:06:02.318 13:15:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59170' 00:06:02.318 13:15:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59170 00:06:02.318 13:15:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59170 00:06:04.234 00:06:04.234 real 0m11.867s 00:06:04.234 user 0m12.109s 00:06:04.234 sys 0m1.374s 00:06:04.234 13:15:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.234 13:15:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:04.234 ************************************ 00:06:04.234 END TEST locking_app_on_unlocked_coremask 00:06:04.234 ************************************ 00:06:04.234 13:15:53 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:04.234 13:15:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:04.234 13:15:53 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.234 13:15:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.234 ************************************ 00:06:04.234 START TEST 
locking_app_on_locked_coremask 00:06:04.234 ************************************ 00:06:04.234 13:15:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:04.234 13:15:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59321 00:06:04.234 13:15:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:04.234 13:15:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59321 /var/tmp/spdk.sock 00:06:04.234 13:15:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59321 ']' 00:06:04.234 13:15:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.234 13:15:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.234 13:15:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.234 13:15:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.234 13:15:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:04.234 [2024-11-17 13:15:53.259042] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:06:04.234 [2024-11-17 13:15:53.259171] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59321 ] 00:06:04.234 [2024-11-17 13:15:53.431601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.493 [2024-11-17 13:15:53.543388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.432 13:15:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.432 13:15:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:05.432 13:15:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59338 00:06:05.432 13:15:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:05.432 13:15:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59338 /var/tmp/spdk2.sock 00:06:05.432 13:15:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:05.432 13:15:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59338 /var/tmp/spdk2.sock 00:06:05.432 13:15:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:05.432 13:15:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:05.432 13:15:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:05.432 13:15:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:06:05.432 13:15:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59338 /var/tmp/spdk2.sock 00:06:05.432 13:15:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59338 ']' 00:06:05.432 13:15:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:05.432 13:15:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:05.432 13:15:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:05.432 13:15:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.432 13:15:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:05.432 [2024-11-17 13:15:54.472048] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:06:05.432 [2024-11-17 13:15:54.472183] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59338 ] 00:06:05.432 [2024-11-17 13:15:54.637138] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59321 has claimed it. 00:06:05.432 [2024-11-17 13:15:54.641244] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:06.017 ERROR: process (pid: 59338) is no longer running 00:06:06.017 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59338) - No such process 00:06:06.018 13:15:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.018 13:15:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:06.018 13:15:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:06.018 13:15:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:06.018 13:15:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:06.018 13:15:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:06.018 13:15:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59321 00:06:06.018 13:15:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59321 00:06:06.018 13:15:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:06.277 13:15:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59321 00:06:06.277 13:15:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59321 ']' 00:06:06.277 13:15:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59321 00:06:06.277 13:15:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:06.277 13:15:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:06.277 13:15:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59321 00:06:06.277 
13:15:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:06.277 13:15:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:06.277 killing process with pid 59321 00:06:06.277 13:15:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59321' 00:06:06.277 13:15:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59321 00:06:06.277 13:15:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59321 00:06:08.817 00:06:08.817 real 0m4.593s 00:06:08.817 user 0m4.764s 00:06:08.817 sys 0m0.716s 00:06:08.817 13:15:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.817 13:15:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.817 ************************************ 00:06:08.817 END TEST locking_app_on_locked_coremask 00:06:08.817 ************************************ 00:06:08.817 13:15:57 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:08.817 13:15:57 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:08.817 13:15:57 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.817 13:15:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:08.817 ************************************ 00:06:08.817 START TEST locking_overlapped_coremask 00:06:08.817 ************************************ 00:06:08.817 13:15:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:08.817 13:15:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59407 00:06:08.817 13:15:57 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:08.817 13:15:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59407 /var/tmp/spdk.sock 00:06:08.817 13:15:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59407 ']' 00:06:08.817 13:15:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.817 13:15:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:08.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.817 13:15:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.817 13:15:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:08.817 13:15:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.817 [2024-11-17 13:15:57.920085] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:06:08.817 [2024-11-17 13:15:57.920229] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59407 ] 00:06:09.078 [2024-11-17 13:15:58.094659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:09.078 [2024-11-17 13:15:58.218929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.078 [2024-11-17 13:15:58.219067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.078 [2024-11-17 13:15:58.219104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:10.021 13:15:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:10.021 13:15:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:10.021 13:15:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59429 00:06:10.021 13:15:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59429 /var/tmp/spdk2.sock 00:06:10.021 13:15:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:10.021 13:15:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59429 /var/tmp/spdk2.sock 00:06:10.021 13:15:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:10.021 13:15:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:10.021 13:15:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:10.021 13:15:59 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:10.021 13:15:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:10.021 13:15:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59429 /var/tmp/spdk2.sock 00:06:10.021 13:15:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59429 ']' 00:06:10.021 13:15:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:10.021 13:15:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:10.021 13:15:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:10.021 13:15:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.021 13:15:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:10.021 [2024-11-17 13:15:59.203605] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:06:10.021 [2024-11-17 13:15:59.203731] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59429 ] 00:06:10.282 [2024-11-17 13:15:59.372982] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59407 has claimed it. 00:06:10.282 [2024-11-17 13:15:59.373057] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
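The lock failure traced above ("Cannot create lock on core 2, probably process 59407 has claimed it") follows directly from the two core masks: the first spdk_tgt runs with -m 0x7 (cores 0-2) and the second with -m 0x1c (cores 2-4). A minimal sketch, not part of the test scripts themselves, showing the contested core via a bitwise AND of the masks:

```shell
# Sketch of the mask overlap behind the claim_cpu_cores error above.
# First target: -m 0x7 (cores 0-2); second target: -m 0x1c (cores 2-4).
first=0x7
second=0x1c
overlap=$(( first & second ))          # 0x4: only core 2 is in both masks
for (( i = 0; i < 8; i++ )); do
    if (( overlap >> i & 1 )); then
        echo "core $i claimed by both masks"
    fi
done
```

Since core 2 is already locked by pid 59407, the second target exits, which is exactly the behavior the NOT wrapper in the test expects.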
00:06:10.851 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59429) - No such process 00:06:10.851 ERROR: process (pid: 59429) is no longer running 00:06:10.851 13:15:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:10.851 13:15:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:10.851 13:15:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:10.851 13:15:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:10.851 13:15:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:10.851 13:15:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:10.851 13:15:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:10.851 13:15:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:10.851 13:15:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:10.851 13:15:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:10.851 13:15:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59407 00:06:10.851 13:15:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59407 ']' 00:06:10.851 13:15:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59407 00:06:10.851 13:15:59 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:10.851 13:15:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:10.851 13:15:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59407 00:06:10.851 13:15:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:10.851 13:15:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:10.851 killing process with pid 59407 00:06:10.851 13:15:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59407' 00:06:10.851 13:15:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59407 00:06:10.851 13:15:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59407 00:06:13.388 00:06:13.388 real 0m4.507s 00:06:13.388 user 0m12.280s 00:06:13.388 sys 0m0.594s 00:06:13.388 13:16:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.388 13:16:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:13.388 ************************************ 00:06:13.388 END TEST locking_overlapped_coremask 00:06:13.388 ************************************ 00:06:13.388 13:16:02 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:13.388 13:16:02 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:13.388 13:16:02 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.388 13:16:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:13.388 ************************************ 00:06:13.388 START TEST 
locking_overlapped_coremask_via_rpc 00:06:13.388 ************************************ 00:06:13.388 13:16:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:13.388 13:16:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59494 00:06:13.388 13:16:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:13.388 13:16:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59494 /var/tmp/spdk.sock 00:06:13.388 13:16:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59494 ']' 00:06:13.388 13:16:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.388 13:16:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.388 13:16:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.388 13:16:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.388 13:16:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.388 [2024-11-17 13:16:02.493043] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:06:13.388 [2024-11-17 13:16:02.493175] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59494 ] 00:06:13.648 [2024-11-17 13:16:02.667317] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:13.648 [2024-11-17 13:16:02.667390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:13.648 [2024-11-17 13:16:02.786357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.648 [2024-11-17 13:16:02.786511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.648 [2024-11-17 13:16:02.786539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:14.588 13:16:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:14.588 13:16:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:14.588 13:16:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59518 00:06:14.588 13:16:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:14.588 13:16:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59518 /var/tmp/spdk2.sock 00:06:14.588 13:16:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59518 ']' 00:06:14.588 13:16:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:14.588 13:16:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:14.588 13:16:03 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:14.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:14.588 13:16:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:14.588 13:16:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.588 [2024-11-17 13:16:03.731394] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:06:14.588 [2024-11-17 13:16:03.731521] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59518 ] 00:06:14.847 [2024-11-17 13:16:03.898863] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
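The check_remaining_locks helper traced elsewhere in this run verifies the lock claim by comparing a glob of /var/tmp/spdk_cpu_lock_* against a brace expansion of the expected files. A side-effect-free sketch of that comparison, using a temporary directory in place of /var/tmp (an assumption for illustration only):

```shell
# Sketch of the check_remaining_locks comparison from event/cpu_locks.sh:
# a -m 0x7 target should leave exactly one lock file per claimed core.
tmp=$(mktemp -d)                                   # stand-in for /var/tmp
touch "$tmp"/spdk_cpu_lock_{000..002}              # cores 0, 1, 2

locks=("$tmp"/spdk_cpu_lock_*)                     # what is actually present
locks_expected=("$tmp"/spdk_cpu_lock_{000..002})   # what should be present

if [[ ${locks[*]} == "${locks_expected[*]}" ]]; then
    echo "lock files match"
fi
rm -rf "$tmp"
```

Pathname expansion sorts the glob results, so the positional comparison against the brace expansion holds whenever exactly those three files exist and no stray lock file remains.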
00:06:14.847 [2024-11-17 13:16:03.898913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:15.123 [2024-11-17 13:16:04.148568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:15.123 [2024-11-17 13:16:04.148651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:15.123 [2024-11-17 13:16:04.148703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:17.679 13:16:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.679 13:16:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:17.679 13:16:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:17.679 13:16:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.679 13:16:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.679 13:16:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.679 13:16:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:17.679 13:16:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:17.679 13:16:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:17.679 13:16:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:17.679 13:16:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:17.679 13:16:06 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:17.679 13:16:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:17.679 13:16:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:17.679 13:16:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.680 13:16:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.680 [2024-11-17 13:16:06.313410] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59494 has claimed it. 00:06:17.680 request: 00:06:17.680 { 00:06:17.680 "method": "framework_enable_cpumask_locks", 00:06:17.680 "req_id": 1 00:06:17.680 } 00:06:17.680 Got JSON-RPC error response 00:06:17.680 response: 00:06:17.680 { 00:06:17.680 "code": -32603, 00:06:17.680 "message": "Failed to claim CPU core: 2" 00:06:17.680 } 00:06:17.680 13:16:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:17.680 13:16:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:17.680 13:16:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:17.680 13:16:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:17.680 13:16:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:17.680 13:16:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59494 /var/tmp/spdk.sock 00:06:17.680 13:16:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # 
'[' -z 59494 ']' 00:06:17.680 13:16:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.680 13:16:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:17.680 13:16:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.680 13:16:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:17.680 13:16:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.680 13:16:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.680 13:16:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:17.680 13:16:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59518 /var/tmp/spdk2.sock 00:06:17.680 13:16:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59518 ']' 00:06:17.680 13:16:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:17.680 13:16:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:17.680 13:16:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:17.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
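The waitforlisten calls traced above poll with max_retries=100 until the target accepts connections on its UNIX domain socket. A simplified, hypothetical sketch of that retry pattern: the real helper in autotest_common.sh probes the RPC socket, while this stand-in merely polls for a path to exist.

```shell
# Hedged sketch of the waitforlisten retry loop (simplified: the real
# helper checks the RPC socket; this version just waits for a path).
wait_for() {
    local path=$1 max_retries=${2:-100} i
    for (( i = 0; i < max_retries; i++ )); do
        [[ -e $path ]] && return 0    # condition met: stop polling
        sleep 0.01                    # brief back-off between attempts
    done
    return 1                          # gave up after max_retries attempts
}
```

A caller would use it as `wait_for /var/tmp/spdk.sock 100 || echo 'timed out'`, mirroring how the test bails out when the target never comes up.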
00:06:17.680 13:16:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:17.680 13:16:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.680 13:16:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.680 13:16:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:17.680 13:16:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:17.680 13:16:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:17.680 13:16:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:17.680 13:16:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:17.680 00:06:17.680 real 0m4.375s 00:06:17.680 user 0m1.321s 00:06:17.680 sys 0m0.181s 00:06:17.680 13:16:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:17.680 13:16:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.680 ************************************ 00:06:17.680 END TEST locking_overlapped_coremask_via_rpc 00:06:17.680 ************************************ 00:06:17.680 13:16:06 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:17.680 13:16:06 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59494 ]] 00:06:17.680 13:16:06 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59494 00:06:17.680 13:16:06 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59494 ']' 00:06:17.680 13:16:06 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59494 00:06:17.680 13:16:06 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:17.680 13:16:06 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:17.680 13:16:06 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59494 00:06:17.680 13:16:06 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:17.680 13:16:06 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:17.680 13:16:06 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59494' 00:06:17.680 killing process with pid 59494 00:06:17.680 13:16:06 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59494 00:06:17.680 13:16:06 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59494 00:06:20.224 13:16:09 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59518 ]] 00:06:20.224 13:16:09 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59518 00:06:20.224 13:16:09 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59518 ']' 00:06:20.224 13:16:09 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59518 00:06:20.224 13:16:09 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:20.224 13:16:09 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:20.224 13:16:09 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59518 00:06:20.224 13:16:09 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:20.224 13:16:09 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:20.224 13:16:09 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59518' 00:06:20.224 killing 
process with pid 59518 00:06:20.224 13:16:09 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59518 00:06:20.224 13:16:09 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59518 00:06:22.771 13:16:11 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:22.771 13:16:11 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:22.771 13:16:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59494 ]] 00:06:22.771 13:16:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59494 00:06:22.771 13:16:11 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59494 ']' 00:06:22.771 13:16:11 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59494 00:06:22.771 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59494) - No such process 00:06:22.771 Process with pid 59494 is not found 00:06:22.771 13:16:11 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59494 is not found' 00:06:22.771 13:16:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59518 ]] 00:06:22.771 13:16:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59518 00:06:22.771 13:16:11 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59518 ']' 00:06:22.771 13:16:11 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59518 00:06:22.771 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59518) - No such process 00:06:22.771 Process with pid 59518 is not found 00:06:22.771 13:16:11 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59518 is not found' 00:06:22.771 13:16:11 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:22.771 00:06:22.771 real 0m50.636s 00:06:22.771 user 1m26.733s 00:06:22.771 sys 0m6.563s 00:06:22.771 13:16:11 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.771 13:16:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:22.771 
************************************ 00:06:22.771 END TEST cpu_locks 00:06:22.771 ************************************ 00:06:22.771 00:06:22.771 real 1m22.215s 00:06:22.771 user 2m29.748s 00:06:22.771 sys 0m10.536s 00:06:22.771 13:16:11 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.771 13:16:11 event -- common/autotest_common.sh@10 -- # set +x 00:06:22.771 ************************************ 00:06:22.771 END TEST event 00:06:22.771 ************************************ 00:06:22.771 13:16:11 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:22.771 13:16:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:22.771 13:16:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.771 13:16:11 -- common/autotest_common.sh@10 -- # set +x 00:06:22.771 ************************************ 00:06:22.771 START TEST thread 00:06:22.771 ************************************ 00:06:22.771 13:16:11 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:23.031 * Looking for test storage... 
00:06:23.031 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:23.031 13:16:12 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:23.031 13:16:12 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:23.031 13:16:12 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:23.031 13:16:12 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:23.031 13:16:12 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:23.031 13:16:12 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:23.031 13:16:12 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:23.031 13:16:12 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:23.031 13:16:12 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:23.031 13:16:12 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:23.031 13:16:12 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:23.031 13:16:12 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:23.031 13:16:12 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:23.031 13:16:12 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:23.031 13:16:12 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:23.031 13:16:12 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:23.031 13:16:12 thread -- scripts/common.sh@345 -- # : 1 00:06:23.031 13:16:12 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:23.031 13:16:12 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:23.031 13:16:12 thread -- scripts/common.sh@365 -- # decimal 1 00:06:23.031 13:16:12 thread -- scripts/common.sh@353 -- # local d=1 00:06:23.031 13:16:12 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:23.031 13:16:12 thread -- scripts/common.sh@355 -- # echo 1 00:06:23.031 13:16:12 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:23.031 13:16:12 thread -- scripts/common.sh@366 -- # decimal 2 00:06:23.031 13:16:12 thread -- scripts/common.sh@353 -- # local d=2 00:06:23.031 13:16:12 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:23.031 13:16:12 thread -- scripts/common.sh@355 -- # echo 2 00:06:23.031 13:16:12 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:23.031 13:16:12 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:23.031 13:16:12 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:23.031 13:16:12 thread -- scripts/common.sh@368 -- # return 0 00:06:23.031 13:16:12 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:23.031 13:16:12 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:23.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.031 --rc genhtml_branch_coverage=1 00:06:23.031 --rc genhtml_function_coverage=1 00:06:23.031 --rc genhtml_legend=1 00:06:23.031 --rc geninfo_all_blocks=1 00:06:23.031 --rc geninfo_unexecuted_blocks=1 00:06:23.031 00:06:23.031 ' 00:06:23.031 13:16:12 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:23.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.031 --rc genhtml_branch_coverage=1 00:06:23.031 --rc genhtml_function_coverage=1 00:06:23.031 --rc genhtml_legend=1 00:06:23.031 --rc geninfo_all_blocks=1 00:06:23.031 --rc geninfo_unexecuted_blocks=1 00:06:23.031 00:06:23.031 ' 00:06:23.031 13:16:12 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:23.031 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.031 --rc genhtml_branch_coverage=1 00:06:23.031 --rc genhtml_function_coverage=1 00:06:23.031 --rc genhtml_legend=1 00:06:23.031 --rc geninfo_all_blocks=1 00:06:23.031 --rc geninfo_unexecuted_blocks=1 00:06:23.031 00:06:23.031 ' 00:06:23.031 13:16:12 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:23.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.031 --rc genhtml_branch_coverage=1 00:06:23.031 --rc genhtml_function_coverage=1 00:06:23.031 --rc genhtml_legend=1 00:06:23.031 --rc geninfo_all_blocks=1 00:06:23.031 --rc geninfo_unexecuted_blocks=1 00:06:23.031 00:06:23.031 ' 00:06:23.031 13:16:12 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:23.031 13:16:12 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:23.031 13:16:12 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.031 13:16:12 thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.031 ************************************ 00:06:23.031 START TEST thread_poller_perf 00:06:23.031 ************************************ 00:06:23.031 13:16:12 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:23.031 [2024-11-17 13:16:12.185712] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:06:23.032 [2024-11-17 13:16:12.185811] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59713 ] 00:06:23.291 [2024-11-17 13:16:12.359626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.291 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:23.291 [2024-11-17 13:16:12.466445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.671 [2024-11-17T13:16:13.895Z] ====================================== 00:06:24.671 [2024-11-17T13:16:13.895Z] busy:2301195694 (cyc) 00:06:24.671 [2024-11-17T13:16:13.895Z] total_run_count: 405000 00:06:24.671 [2024-11-17T13:16:13.895Z] tsc_hz: 2290000000 (cyc) 00:06:24.671 [2024-11-17T13:16:13.895Z] ====================================== 00:06:24.671 [2024-11-17T13:16:13.895Z] poller_cost: 5681 (cyc), 2480 (nsec) 00:06:24.671 00:06:24.671 real 0m1.553s 00:06:24.671 user 0m1.353s 00:06:24.671 sys 0m0.093s 00:06:24.671 13:16:13 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.671 13:16:13 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:24.671 ************************************ 00:06:24.671 END TEST thread_poller_perf 00:06:24.671 ************************************ 00:06:24.671 13:16:13 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:24.671 13:16:13 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:24.671 13:16:13 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.671 13:16:13 thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.671 ************************************ 00:06:24.671 START TEST thread_poller_perf 00:06:24.671 
************************************ 00:06:24.671 13:16:13 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:24.671 [2024-11-17 13:16:13.803334] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:06:24.671 [2024-11-17 13:16:13.803467] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59744 ] 00:06:24.931 [2024-11-17 13:16:13.978458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.931 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:24.931 [2024-11-17 13:16:14.091939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.312 [2024-11-17T13:16:15.536Z] ====================================== 00:06:26.312 [2024-11-17T13:16:15.536Z] busy:2293572790 (cyc) 00:06:26.312 [2024-11-17T13:16:15.536Z] total_run_count: 5261000 00:06:26.312 [2024-11-17T13:16:15.536Z] tsc_hz: 2290000000 (cyc) 00:06:26.312 [2024-11-17T13:16:15.536Z] ====================================== 00:06:26.312 [2024-11-17T13:16:15.536Z] poller_cost: 435 (cyc), 189 (nsec) 00:06:26.312 00:06:26.312 real 0m1.555s 00:06:26.312 user 0m1.357s 00:06:26.312 sys 0m0.091s 00:06:26.312 13:16:15 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.312 13:16:15 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:26.312 ************************************ 00:06:26.312 END TEST thread_poller_perf 00:06:26.312 ************************************ 00:06:26.312 13:16:15 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:26.312 00:06:26.312 real 0m3.453s 00:06:26.312 user 0m2.882s 00:06:26.312 sys 0m0.375s 00:06:26.312 13:16:15 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.312 13:16:15 thread -- common/autotest_common.sh@10 -- # set +x 00:06:26.312 ************************************ 00:06:26.312 END TEST thread 00:06:26.312 ************************************ 00:06:26.312 13:16:15 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:26.312 13:16:15 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:26.312 13:16:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:26.312 13:16:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.312 13:16:15 -- common/autotest_common.sh@10 -- # set +x 00:06:26.312 ************************************ 00:06:26.312 START TEST app_cmdline 00:06:26.312 ************************************ 00:06:26.312 13:16:15 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:26.572 * Looking for test storage... 00:06:26.572 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:26.572 13:16:15 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:26.572 13:16:15 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:26.572 13:16:15 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:26.573 13:16:15 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:26.573 13:16:15 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:26.573 13:16:15 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:26.573 13:16:15 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:26.573 13:16:15 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:26.573 13:16:15 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:26.573 13:16:15 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:26.573 13:16:15 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:26.573 13:16:15 app_cmdline -- scripts/common.sh@338 -- # 
local 'op=<' 00:06:26.573 13:16:15 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:26.573 13:16:15 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:26.573 13:16:15 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:26.573 13:16:15 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:26.573 13:16:15 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:26.573 13:16:15 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:26.573 13:16:15 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:26.573 13:16:15 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:26.573 13:16:15 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:26.573 13:16:15 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:26.573 13:16:15 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:26.573 13:16:15 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:26.573 13:16:15 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:26.573 13:16:15 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:26.573 13:16:15 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:26.573 13:16:15 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:26.573 13:16:15 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:26.573 13:16:15 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:26.573 13:16:15 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:26.573 13:16:15 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:26.573 13:16:15 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:26.573 13:16:15 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:26.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.573 --rc genhtml_branch_coverage=1 00:06:26.573 --rc genhtml_function_coverage=1 00:06:26.573 --rc 
genhtml_legend=1 00:06:26.573 --rc geninfo_all_blocks=1 00:06:26.573 --rc geninfo_unexecuted_blocks=1 00:06:26.573 00:06:26.573 ' 00:06:26.573 13:16:15 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:26.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.573 --rc genhtml_branch_coverage=1 00:06:26.573 --rc genhtml_function_coverage=1 00:06:26.573 --rc genhtml_legend=1 00:06:26.573 --rc geninfo_all_blocks=1 00:06:26.573 --rc geninfo_unexecuted_blocks=1 00:06:26.573 00:06:26.573 ' 00:06:26.573 13:16:15 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:26.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.573 --rc genhtml_branch_coverage=1 00:06:26.573 --rc genhtml_function_coverage=1 00:06:26.573 --rc genhtml_legend=1 00:06:26.573 --rc geninfo_all_blocks=1 00:06:26.573 --rc geninfo_unexecuted_blocks=1 00:06:26.573 00:06:26.573 ' 00:06:26.573 13:16:15 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:26.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.573 --rc genhtml_branch_coverage=1 00:06:26.573 --rc genhtml_function_coverage=1 00:06:26.573 --rc genhtml_legend=1 00:06:26.573 --rc geninfo_all_blocks=1 00:06:26.573 --rc geninfo_unexecuted_blocks=1 00:06:26.573 00:06:26.573 ' 00:06:26.573 13:16:15 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:26.573 13:16:15 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59833 00:06:26.573 13:16:15 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:26.573 13:16:15 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59833 00:06:26.573 13:16:15 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59833 ']' 00:06:26.573 13:16:15 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.573 13:16:15 app_cmdline -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:06:26.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.573 13:16:15 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.573 13:16:15 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:26.573 13:16:15 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:26.573 [2024-11-17 13:16:15.743950] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:06:26.573 [2024-11-17 13:16:15.744074] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59833 ] 00:06:26.833 [2024-11-17 13:16:15.918142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.833 [2024-11-17 13:16:16.033884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.774 13:16:16 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:27.774 13:16:16 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:27.774 13:16:16 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:28.034 { 00:06:28.034 "version": "SPDK v25.01-pre git sha1 ca87521f7", 00:06:28.034 "fields": { 00:06:28.034 "major": 25, 00:06:28.034 "minor": 1, 00:06:28.034 "patch": 0, 00:06:28.034 "suffix": "-pre", 00:06:28.034 "commit": "ca87521f7" 00:06:28.034 } 00:06:28.034 } 00:06:28.034 13:16:17 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:28.034 13:16:17 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:28.034 13:16:17 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:28.034 13:16:17 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:28.034 13:16:17 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:28.034 13:16:17 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:28.034 13:16:17 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:28.034 13:16:17 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.034 13:16:17 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:28.034 13:16:17 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.034 13:16:17 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:28.034 13:16:17 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:28.034 13:16:17 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:28.034 13:16:17 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:28.034 13:16:17 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:28.034 13:16:17 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:28.034 13:16:17 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:28.034 13:16:17 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:28.034 13:16:17 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:28.034 13:16:17 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:28.034 13:16:17 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:28.034 13:16:17 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:28.034 13:16:17 app_cmdline -- common/autotest_common.sh@646 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:28.034 13:16:17 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:28.294 request: 00:06:28.294 { 00:06:28.294 "method": "env_dpdk_get_mem_stats", 00:06:28.294 "req_id": 1 00:06:28.294 } 00:06:28.294 Got JSON-RPC error response 00:06:28.294 response: 00:06:28.294 { 00:06:28.294 "code": -32601, 00:06:28.294 "message": "Method not found" 00:06:28.294 } 00:06:28.294 13:16:17 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:28.294 13:16:17 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:28.294 13:16:17 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:28.294 13:16:17 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:28.294 13:16:17 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59833 00:06:28.294 13:16:17 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59833 ']' 00:06:28.294 13:16:17 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59833 00:06:28.294 13:16:17 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:28.294 13:16:17 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:28.294 13:16:17 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59833 00:06:28.294 13:16:17 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:28.294 13:16:17 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:28.294 killing process with pid 59833 00:06:28.294 13:16:17 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59833' 00:06:28.294 13:16:17 app_cmdline -- common/autotest_common.sh@973 -- # kill 59833 00:06:28.294 13:16:17 app_cmdline -- common/autotest_common.sh@978 -- # wait 59833 00:06:30.831 00:06:30.831 real 0m4.272s 00:06:30.831 user 0m4.478s 00:06:30.831 sys 0m0.602s 00:06:30.831 13:16:19 app_cmdline -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.831 ************************************ 00:06:30.831 END TEST app_cmdline 00:06:30.831 ************************************ 00:06:30.831 13:16:19 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:30.831 13:16:19 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:30.831 13:16:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:30.831 13:16:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.831 13:16:19 -- common/autotest_common.sh@10 -- # set +x 00:06:30.831 ************************************ 00:06:30.831 START TEST version 00:06:30.831 ************************************ 00:06:30.831 13:16:19 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:30.831 * Looking for test storage... 00:06:30.831 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:30.831 13:16:19 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:30.831 13:16:19 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:30.831 13:16:19 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:30.831 13:16:19 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:30.831 13:16:19 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:30.831 13:16:19 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:30.831 13:16:19 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:30.831 13:16:19 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:30.831 13:16:19 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:30.831 13:16:19 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:30.831 13:16:19 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:30.831 13:16:19 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:30.831 13:16:19 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:30.831 13:16:19 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:06:30.831 13:16:19 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:30.831 13:16:19 version -- scripts/common.sh@344 -- # case "$op" in 00:06:30.831 13:16:19 version -- scripts/common.sh@345 -- # : 1 00:06:30.831 13:16:19 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:30.831 13:16:19 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:30.831 13:16:19 version -- scripts/common.sh@365 -- # decimal 1 00:06:30.831 13:16:19 version -- scripts/common.sh@353 -- # local d=1 00:06:30.831 13:16:19 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:30.831 13:16:19 version -- scripts/common.sh@355 -- # echo 1 00:06:30.831 13:16:19 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:30.831 13:16:19 version -- scripts/common.sh@366 -- # decimal 2 00:06:30.831 13:16:19 version -- scripts/common.sh@353 -- # local d=2 00:06:30.831 13:16:19 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:30.831 13:16:19 version -- scripts/common.sh@355 -- # echo 2 00:06:30.831 13:16:19 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:30.831 13:16:19 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:30.831 13:16:19 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:30.831 13:16:19 version -- scripts/common.sh@368 -- # return 0 00:06:30.832 13:16:19 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:30.832 13:16:19 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:30.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.832 --rc genhtml_branch_coverage=1 00:06:30.832 --rc genhtml_function_coverage=1 00:06:30.832 --rc genhtml_legend=1 00:06:30.832 --rc geninfo_all_blocks=1 00:06:30.832 --rc geninfo_unexecuted_blocks=1 00:06:30.832 00:06:30.832 ' 00:06:30.832 13:16:19 version -- common/autotest_common.sh@1706 -- # 
LCOV_OPTS=' 00:06:30.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.832 --rc genhtml_branch_coverage=1 00:06:30.832 --rc genhtml_function_coverage=1 00:06:30.832 --rc genhtml_legend=1 00:06:30.832 --rc geninfo_all_blocks=1 00:06:30.832 --rc geninfo_unexecuted_blocks=1 00:06:30.832 00:06:30.832 ' 00:06:30.832 13:16:19 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:30.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.832 --rc genhtml_branch_coverage=1 00:06:30.832 --rc genhtml_function_coverage=1 00:06:30.832 --rc genhtml_legend=1 00:06:30.832 --rc geninfo_all_blocks=1 00:06:30.832 --rc geninfo_unexecuted_blocks=1 00:06:30.832 00:06:30.832 ' 00:06:30.832 13:16:19 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:30.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.832 --rc genhtml_branch_coverage=1 00:06:30.832 --rc genhtml_function_coverage=1 00:06:30.832 --rc genhtml_legend=1 00:06:30.832 --rc geninfo_all_blocks=1 00:06:30.832 --rc geninfo_unexecuted_blocks=1 00:06:30.832 00:06:30.832 ' 00:06:30.832 13:16:19 version -- app/version.sh@17 -- # get_header_version major 00:06:30.832 13:16:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:30.832 13:16:19 version -- app/version.sh@14 -- # cut -f2 00:06:30.832 13:16:19 version -- app/version.sh@14 -- # tr -d '"' 00:06:30.832 13:16:19 version -- app/version.sh@17 -- # major=25 00:06:30.832 13:16:19 version -- app/version.sh@18 -- # get_header_version minor 00:06:30.832 13:16:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:30.832 13:16:19 version -- app/version.sh@14 -- # cut -f2 00:06:30.832 13:16:19 version -- app/version.sh@14 -- # tr -d '"' 00:06:30.832 13:16:19 version -- app/version.sh@18 -- # minor=1 00:06:30.832 13:16:20 
version -- app/version.sh@19 -- # get_header_version patch 00:06:30.832 13:16:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:30.832 13:16:20 version -- app/version.sh@14 -- # cut -f2 00:06:30.832 13:16:20 version -- app/version.sh@14 -- # tr -d '"' 00:06:30.832 13:16:20 version -- app/version.sh@19 -- # patch=0 00:06:30.832 13:16:20 version -- app/version.sh@20 -- # get_header_version suffix 00:06:30.832 13:16:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:30.832 13:16:20 version -- app/version.sh@14 -- # cut -f2 00:06:30.832 13:16:20 version -- app/version.sh@14 -- # tr -d '"' 00:06:30.832 13:16:20 version -- app/version.sh@20 -- # suffix=-pre 00:06:30.832 13:16:20 version -- app/version.sh@22 -- # version=25.1 00:06:30.832 13:16:20 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:30.832 13:16:20 version -- app/version.sh@28 -- # version=25.1rc0 00:06:30.832 13:16:20 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:30.832 13:16:20 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:31.092 13:16:20 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:31.092 13:16:20 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:31.092 00:06:31.092 real 0m0.310s 00:06:31.092 user 0m0.196s 00:06:31.092 sys 0m0.173s 00:06:31.092 13:16:20 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.092 13:16:20 version -- common/autotest_common.sh@10 -- # set +x 00:06:31.092 ************************************ 00:06:31.092 END TEST version 00:06:31.092 ************************************ 00:06:31.092 
13:16:20 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:31.092 13:16:20 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:06:31.092 13:16:20 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:31.092 13:16:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:31.092 13:16:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.092 13:16:20 -- common/autotest_common.sh@10 -- # set +x 00:06:31.092 ************************************ 00:06:31.092 START TEST bdev_raid 00:06:31.092 ************************************ 00:06:31.092 13:16:20 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:31.092 * Looking for test storage... 00:06:31.092 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:31.092 13:16:20 bdev_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:31.092 13:16:20 bdev_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:06:31.092 13:16:20 bdev_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:31.352 13:16:20 bdev_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:31.352 13:16:20 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:31.352 13:16:20 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:31.352 13:16:20 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:31.352 13:16:20 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:06:31.352 13:16:20 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:06:31.352 13:16:20 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:06:31.352 13:16:20 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:06:31.352 13:16:20 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:06:31.352 13:16:20 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:06:31.352 13:16:20 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:06:31.352 13:16:20 bdev_raid -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:06:31.352 13:16:20 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:06:31.352 13:16:20 bdev_raid -- scripts/common.sh@345 -- # : 1 00:06:31.352 13:16:20 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:31.352 13:16:20 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:31.352 13:16:20 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:06:31.352 13:16:20 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:06:31.352 13:16:20 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:31.352 13:16:20 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:06:31.352 13:16:20 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:06:31.352 13:16:20 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:06:31.352 13:16:20 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:06:31.352 13:16:20 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:31.352 13:16:20 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:06:31.352 13:16:20 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:06:31.352 13:16:20 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:31.352 13:16:20 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:31.352 13:16:20 bdev_raid -- scripts/common.sh@368 -- # return 0 00:06:31.352 13:16:20 bdev_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:31.352 13:16:20 bdev_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:31.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.352 --rc genhtml_branch_coverage=1 00:06:31.352 --rc genhtml_function_coverage=1 00:06:31.352 --rc genhtml_legend=1 00:06:31.352 --rc geninfo_all_blocks=1 00:06:31.352 --rc geninfo_unexecuted_blocks=1 00:06:31.352 00:06:31.352 ' 00:06:31.352 13:16:20 bdev_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:31.352 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:31.352 --rc genhtml_branch_coverage=1 00:06:31.352 --rc genhtml_function_coverage=1 00:06:31.352 --rc genhtml_legend=1 00:06:31.352 --rc geninfo_all_blocks=1 00:06:31.352 --rc geninfo_unexecuted_blocks=1 00:06:31.352 00:06:31.352 ' 00:06:31.352 13:16:20 bdev_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:31.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.352 --rc genhtml_branch_coverage=1 00:06:31.352 --rc genhtml_function_coverage=1 00:06:31.352 --rc genhtml_legend=1 00:06:31.352 --rc geninfo_all_blocks=1 00:06:31.352 --rc geninfo_unexecuted_blocks=1 00:06:31.352 00:06:31.352 ' 00:06:31.352 13:16:20 bdev_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:31.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.352 --rc genhtml_branch_coverage=1 00:06:31.352 --rc genhtml_function_coverage=1 00:06:31.352 --rc genhtml_legend=1 00:06:31.352 --rc geninfo_all_blocks=1 00:06:31.352 --rc geninfo_unexecuted_blocks=1 00:06:31.352 00:06:31.352 ' 00:06:31.352 13:16:20 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:31.352 13:16:20 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:06:31.352 13:16:20 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:06:31.352 13:16:20 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:06:31.352 13:16:20 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:06:31.352 13:16:20 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:06:31.352 13:16:20 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:06:31.352 13:16:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:31.352 13:16:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.352 13:16:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:31.352 ************************************ 
00:06:31.352 START TEST raid1_resize_data_offset_test 00:06:31.352 ************************************ 00:06:31.353 13:16:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:06:31.353 Process raid pid: 60025 00:06:31.353 13:16:20 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=60025 00:06:31.353 13:16:20 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 60025' 00:06:31.353 13:16:20 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:31.353 13:16:20 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 60025 00:06:31.353 13:16:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 60025 ']' 00:06:31.353 13:16:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.353 13:16:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:31.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.353 13:16:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.353 13:16:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:31.353 13:16:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.353 [2024-11-17 13:16:20.466443] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:06:31.353 [2024-11-17 13:16:20.466559] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:31.613 [2024-11-17 13:16:20.628577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.613 [2024-11-17 13:16:20.735290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.872 [2024-11-17 13:16:20.933001] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:31.872 [2024-11-17 13:16:20.933039] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:32.131 13:16:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:32.131 13:16:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:06:32.131 13:16:21 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:06:32.131 13:16:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.131 13:16:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.392 malloc0 00:06:32.392 13:16:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.392 13:16:21 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:06:32.392 13:16:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.392 13:16:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.392 malloc1 00:06:32.392 13:16:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.392 13:16:21 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:06:32.392 13:16:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.392 13:16:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.392 null0 00:06:32.392 13:16:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.392 13:16:21 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:06:32.392 13:16:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.392 13:16:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.392 [2024-11-17 13:16:21.474246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:06:32.392 [2024-11-17 13:16:21.476031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:32.392 [2024-11-17 13:16:21.476084] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:06:32.392 [2024-11-17 13:16:21.476243] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:32.392 [2024-11-17 13:16:21.476261] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:06:32.392 [2024-11-17 13:16:21.476577] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:32.392 [2024-11-17 13:16:21.476817] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:32.392 [2024-11-17 13:16:21.476844] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:32.392 [2024-11-17 13:16:21.477071] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:06:32.392 13:16:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.392 13:16:21 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:32.392 13:16:21 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:32.392 13:16:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.392 13:16:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.392 13:16:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.392 13:16:21 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:06:32.392 13:16:21 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:06:32.392 13:16:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.392 13:16:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.392 [2024-11-17 13:16:21.538078] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:06:32.392 13:16:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.392 13:16:21 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:06:32.392 13:16:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.392 13:16:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.960 malloc2 00:06:32.960 13:16:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.960 13:16:22 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:06:32.960 13:16:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.960 13:16:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.960 [2024-11-17 13:16:22.065680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:06:32.960 [2024-11-17 13:16:22.081296] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:32.960 13:16:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.960 [2024-11-17 13:16:22.083035] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:06:32.960 13:16:22 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:32.961 13:16:22 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:32.961 13:16:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.961 13:16:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.961 13:16:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.961 13:16:22 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:06:32.961 13:16:22 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 60025 00:06:32.961 13:16:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 60025 ']' 00:06:32.961 13:16:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 60025 00:06:32.961 13:16:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname 00:06:32.961 13:16:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:06:32.961 13:16:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60025 00:06:32.961 13:16:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:32.961 13:16:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:32.961 13:16:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60025' 00:06:32.961 killing process with pid 60025 00:06:32.961 13:16:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 60025 00:06:32.961 [2024-11-17 13:16:22.179846] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:32.961 13:16:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 60025 00:06:32.961 [2024-11-17 13:16:22.181667] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:06:32.961 [2024-11-17 13:16:22.181727] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:32.961 [2024-11-17 13:16:22.181748] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:06:33.220 [2024-11-17 13:16:22.216732] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:33.220 [2024-11-17 13:16:22.217057] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:33.220 [2024-11-17 13:16:22.217103] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:35.129 [2024-11-17 13:16:23.953091] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:36.113 13:16:25 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:06:36.113 00:06:36.113 real 0m4.656s 00:06:36.113 user 0m4.546s 00:06:36.113 sys 0m0.557s 00:06:36.113 13:16:25 
bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.113 13:16:25 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:36.113 ************************************ 00:06:36.113 END TEST raid1_resize_data_offset_test 00:06:36.113 ************************************ 00:06:36.113 13:16:25 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:06:36.113 13:16:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:36.113 13:16:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.113 13:16:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:36.113 ************************************ 00:06:36.113 START TEST raid0_resize_superblock_test 00:06:36.113 ************************************ 00:06:36.114 13:16:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0 00:06:36.114 13:16:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:06:36.114 13:16:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60106 00:06:36.114 Process raid pid: 60106 00:06:36.114 13:16:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60106' 00:06:36.114 13:16:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:36.114 13:16:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60106 00:06:36.114 13:16:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60106 ']' 00:06:36.114 13:16:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.114 13:16:25 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:06:36.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.114 13:16:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.114 13:16:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:36.114 13:16:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:36.114 [2024-11-17 13:16:25.187277] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:06:36.114 [2024-11-17 13:16:25.187405] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:36.372 [2024-11-17 13:16:25.359302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.372 [2024-11-17 13:16:25.469597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.632 [2024-11-17 13:16:25.669252] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:36.632 [2024-11-17 13:16:25.669291] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:36.891 13:16:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:36.891 13:16:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:06:36.891 13:16:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:06:36.891 13:16:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.891 13:16:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:06:37.459 malloc0 00:06:37.460 13:16:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.460 13:16:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:37.460 13:16:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.460 13:16:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.460 [2024-11-17 13:16:26.549497] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:37.460 [2024-11-17 13:16:26.549574] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:37.460 [2024-11-17 13:16:26.549599] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:37.460 [2024-11-17 13:16:26.549612] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:37.460 [2024-11-17 13:16:26.551676] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:37.460 [2024-11-17 13:16:26.551715] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:37.460 pt0 00:06:37.460 13:16:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.460 13:16:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:06:37.460 13:16:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.460 13:16:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.460 2e781d87-47cd-42c8-9c3e-e69d8accefaa 00:06:37.460 13:16:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.460 13:16:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 
00:06:37.460 13:16:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.460 13:16:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.460 eb0e0a7e-5457-4d2d-a27a-4dc071b8612e 00:06:37.460 13:16:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.460 13:16:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:06:37.460 13:16:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.460 13:16:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.460 34986752-4a65-4871-8290-f5fc22b43fd6 00:06:37.460 13:16:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.460 13:16:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:06:37.460 13:16:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:06:37.460 13:16:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.460 13:16:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.460 [2024-11-17 13:16:26.681798] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev eb0e0a7e-5457-4d2d-a27a-4dc071b8612e is claimed 00:06:37.460 [2024-11-17 13:16:26.681892] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 34986752-4a65-4871-8290-f5fc22b43fd6 is claimed 00:06:37.460 [2024-11-17 13:16:26.682018] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:37.460 [2024-11-17 13:16:26.682033] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:06:37.460 [2024-11-17 13:16:26.682359] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:37.460 [2024-11-17 13:16:26.682601] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:37.460 [2024-11-17 13:16:26.682624] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:37.460 [2024-11-17 13:16:26.682798] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:37.719 13:16:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.719 13:16:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:37.719 13:16:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:06:37.719 13:16:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.719 13:16:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.719 13:16:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.719 13:16:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:06:37.719 13:16:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:37.719 13:16:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:06:37.719 13:16:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.719 13:16:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.719 13:16:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.719 13:16:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:06:37.719 13:16:26 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:37.719 13:16:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:37.719 13:16:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:37.719 13:16:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:06:37.719 13:16:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.719 13:16:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.719 [2024-11-17 13:16:26.793832] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:37.719 13:16:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.719 13:16:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:37.719 13:16:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:37.719 13:16:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:06:37.719 13:16:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:37.719 13:16:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.719 13:16:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.719 [2024-11-17 13:16:26.837763] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:37.719 [2024-11-17 13:16:26.837797] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'eb0e0a7e-5457-4d2d-a27a-4dc071b8612e' was resized: old size 131072, new size 204800 00:06:37.719 13:16:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:06:37.719 13:16:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:37.719 13:16:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.719 13:16:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.719 [2024-11-17 13:16:26.849679] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:37.719 [2024-11-17 13:16:26.849708] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '34986752-4a65-4871-8290-f5fc22b43fd6' was resized: old size 131072, new size 204800 00:06:37.719 [2024-11-17 13:16:26.849751] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:06:37.719 13:16:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.719 13:16:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:37.719 13:16:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:37.719 13:16:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.719 13:16:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.719 13:16:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.719 13:16:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:37.719 13:16:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:37.719 13:16:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.719 13:16:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.719 13:16:26 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:37.719 13:16:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.719 13:16:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:37.980 13:16:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:37.980 13:16:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:37.980 13:16:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:37.980 13:16:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:06:37.980 13:16:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.980 13:16:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.980 [2024-11-17 13:16:26.953576] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:37.980 13:16:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.980 13:16:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:37.980 13:16:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:37.980 13:16:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:06:37.980 13:16:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:37.980 13:16:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.980 13:16:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.980 [2024-11-17 13:16:26.997301] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 
being removed: closing lvstore lvs0 00:06:37.980 [2024-11-17 13:16:26.997370] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:06:37.980 [2024-11-17 13:16:26.997383] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:37.980 [2024-11-17 13:16:26.997400] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:37.980 [2024-11-17 13:16:26.997515] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:37.980 [2024-11-17 13:16:26.997548] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:37.980 [2024-11-17 13:16:26.997562] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:37.980 13:16:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.980 13:16:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:37.980 13:16:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.980 13:16:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.980 [2024-11-17 13:16:27.009191] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:37.980 [2024-11-17 13:16:27.009254] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:37.980 [2024-11-17 13:16:27.009276] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:06:37.980 [2024-11-17 13:16:27.009287] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:37.980 [2024-11-17 13:16:27.011471] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:37.980 [2024-11-17 13:16:27.011505] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:06:37.980 [2024-11-17 13:16:27.013157] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev eb0e0a7e-5457-4d2d-a27a-4dc071b8612e 00:06:37.980 [2024-11-17 13:16:27.013248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev eb0e0a7e-5457-4d2d-a27a-4dc071b8612e is claimed 00:06:37.980 [2024-11-17 13:16:27.013394] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 34986752-4a65-4871-8290-f5fc22b43fd6 00:06:37.980 [2024-11-17 13:16:27.013436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 34986752-4a65-4871-8290-f5fc22b43fd6 is claimed 00:06:37.980 [2024-11-17 13:16:27.013669] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 34986752-4a65-4871-8290-f5fc22b43fd6 (2) smaller than existing raid bdev Raid (3) 00:06:37.980 [2024-11-17 13:16:27.013701] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev eb0e0a7e-5457-4d2d-a27a-4dc071b8612e: File exists 00:06:37.980 [2024-11-17 13:16:27.013735] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:06:37.980 [2024-11-17 13:16:27.013761] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:06:37.980 [2024-11-17 13:16:27.014056] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:06:37.980 pt0 00:06:37.980 [2024-11-17 13:16:27.014255] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:06:37.980 [2024-11-17 13:16:27.014273] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:06:37.980 [2024-11-17 13:16:27.014455] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:37.980 13:16:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.980 13:16:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd 
bdev_wait_for_examine 00:06:37.980 13:16:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.980 13:16:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.980 13:16:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.980 13:16:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:37.980 13:16:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:06:37.980 13:16:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:37.980 13:16:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:37.980 13:16:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.980 13:16:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.980 [2024-11-17 13:16:27.037418] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:37.980 13:16:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.980 13:16:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:37.980 13:16:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:37.980 13:16:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:06:37.980 13:16:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60106 00:06:37.980 13:16:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60106 ']' 00:06:37.980 13:16:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60106 00:06:37.980 13:16:27 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:06:37.980 13:16:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:37.980 13:16:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60106 00:06:37.980 13:16:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:37.980 13:16:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:37.980 killing process with pid 60106 00:06:37.980 13:16:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60106' 00:06:37.980 13:16:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60106 00:06:37.980 [2024-11-17 13:16:27.110057] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:37.980 [2024-11-17 13:16:27.110134] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:37.980 [2024-11-17 13:16:27.110193] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:37.980 [2024-11-17 13:16:27.110230] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:06:37.980 13:16:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60106 00:06:39.362 [2024-11-17 13:16:28.520260] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:40.742 13:16:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:06:40.742 00:06:40.742 real 0m4.493s 00:06:40.742 user 0m4.717s 00:06:40.742 sys 0m0.541s 00:06:40.742 13:16:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.742 13:16:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:40.742 
************************************ 00:06:40.742 END TEST raid0_resize_superblock_test 00:06:40.742 ************************************ 00:06:40.742 13:16:29 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:06:40.742 13:16:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:40.742 13:16:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.742 13:16:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:40.742 ************************************ 00:06:40.742 START TEST raid1_resize_superblock_test 00:06:40.742 ************************************ 00:06:40.742 13:16:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1 00:06:40.742 13:16:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:06:40.742 13:16:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60205 00:06:40.742 13:16:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:40.742 Process raid pid: 60205 00:06:40.742 13:16:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60205' 00:06:40.742 13:16:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60205 00:06:40.742 13:16:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60205 ']' 00:06:40.742 13:16:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.742 13:16:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:40.742 13:16:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.742 13:16:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.742 13:16:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:40.742 [2024-11-17 13:16:29.751304] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:06:40.742 [2024-11-17 13:16:29.751431] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:40.742 [2024-11-17 13:16:29.925677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.002 [2024-11-17 13:16:30.042006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.260 [2024-11-17 13:16:30.245126] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:41.260 [2024-11-17 13:16:30.245190] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:41.519 13:16:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:41.519 13:16:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:06:41.519 13:16:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:06:41.519 13:16:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.519 13:16:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.088 malloc0 00:06:42.088 13:16:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.088 13:16:31 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:42.088 13:16:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.088 13:16:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.088 [2024-11-17 13:16:31.133853] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:42.088 [2024-11-17 13:16:31.133914] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:42.088 [2024-11-17 13:16:31.133954] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:42.088 [2024-11-17 13:16:31.133966] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:42.088 [2024-11-17 13:16:31.136009] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:42.088 [2024-11-17 13:16:31.136048] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:42.088 pt0 00:06:42.088 13:16:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.088 13:16:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:06:42.088 13:16:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.088 13:16:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.088 c6e0594d-f164-47da-9b30-75cf62371717 00:06:42.088 13:16:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.088 13:16:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:06:42.088 13:16:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.088 13:16:31 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.088 ae4d141c-1105-4117-9a61-d4f3f9270d97 00:06:42.088 13:16:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.088 13:16:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:06:42.088 13:16:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.088 13:16:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.088 0e43a407-4b7a-4487-80e9-e6946f8b2efb 00:06:42.088 13:16:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.088 13:16:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:06:42.088 13:16:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:06:42.088 13:16:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.088 13:16:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.088 [2024-11-17 13:16:31.265859] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev ae4d141c-1105-4117-9a61-d4f3f9270d97 is claimed 00:06:42.088 [2024-11-17 13:16:31.265942] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 0e43a407-4b7a-4487-80e9-e6946f8b2efb is claimed 00:06:42.088 [2024-11-17 13:16:31.266059] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:42.088 [2024-11-17 13:16:31.266074] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:06:42.088 [2024-11-17 13:16:31.266381] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:42.088 [2024-11-17 13:16:31.266605] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:42.088 [2024-11-17 13:16:31.266628] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:42.088 [2024-11-17 13:16:31.266790] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:42.088 13:16:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.088 13:16:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:42.088 13:16:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.088 13:16:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:06:42.088 13:16:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.088 13:16:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.349 13:16:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:06:42.349 13:16:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:42.349 13:16:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.349 13:16:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.349 13:16:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:06:42.349 13:16:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.349 13:16:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:06:42.349 13:16:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:42.349 13:16:31 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:42.349 13:16:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.349 13:16:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.349 13:16:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:42.349 13:16:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:06:42.349 [2024-11-17 13:16:31.377852] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:42.349 13:16:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.349 13:16:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:42.349 13:16:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:42.349 13:16:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:06:42.349 13:16:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:42.349 13:16:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.349 13:16:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.349 [2024-11-17 13:16:31.425703] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:42.349 [2024-11-17 13:16:31.425733] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'ae4d141c-1105-4117-9a61-d4f3f9270d97' was resized: old size 131072, new size 204800 00:06:42.349 13:16:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.349 13:16:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:42.349 13:16:31 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.349 13:16:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.349 [2024-11-17 13:16:31.437640] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:42.349 [2024-11-17 13:16:31.437666] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '0e43a407-4b7a-4487-80e9-e6946f8b2efb' was resized: old size 131072, new size 204800 00:06:42.349 [2024-11-17 13:16:31.437690] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:06:42.349 13:16:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.349 13:16:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:42.349 13:16:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:42.349 13:16:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.349 13:16:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.349 13:16:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.349 13:16:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:42.349 13:16:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:42.349 13:16:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.349 13:16:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.349 13:16:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:42.349 13:16:31 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.349 13:16:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:42.349 13:16:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:42.349 13:16:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:06:42.349 13:16:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:42.349 13:16:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:42.349 13:16:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.349 13:16:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.349 [2024-11-17 13:16:31.549552] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:42.349 13:16:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.349 13:16:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:42.349 13:16:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:42.349 13:16:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:06:42.610 13:16:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:42.610 13:16:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.611 13:16:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.611 [2024-11-17 13:16:31.577342] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:06:42.611 [2024-11-17 13:16:31.577411] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 
00:06:42.611 [2024-11-17 13:16:31.577439] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:42.611 [2024-11-17 13:16:31.577576] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:42.611 [2024-11-17 13:16:31.577869] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:42.611 [2024-11-17 13:16:31.577952] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:42.611 [2024-11-17 13:16:31.577967] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:42.611 13:16:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.611 13:16:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:42.611 13:16:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.611 13:16:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.611 [2024-11-17 13:16:31.589265] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:42.611 [2024-11-17 13:16:31.589318] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:42.611 [2024-11-17 13:16:31.589337] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:06:42.611 [2024-11-17 13:16:31.589350] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:42.611 [2024-11-17 13:16:31.591454] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:42.611 [2024-11-17 13:16:31.591504] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:42.611 [2024-11-17 13:16:31.593089] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 
ae4d141c-1105-4117-9a61-d4f3f9270d97 00:06:42.611 [2024-11-17 13:16:31.593183] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev ae4d141c-1105-4117-9a61-d4f3f9270d97 is claimed 00:06:42.611 [2024-11-17 13:16:31.593339] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 0e43a407-4b7a-4487-80e9-e6946f8b2efb 00:06:42.611 [2024-11-17 13:16:31.593413] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 0e43a407-4b7a-4487-80e9-e6946f8b2efb is claimed 00:06:42.611 [2024-11-17 13:16:31.593581] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 0e43a407-4b7a-4487-80e9-e6946f8b2efb (2) smaller than existing raid bdev Raid (3) 00:06:42.611 [2024-11-17 13:16:31.593624] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev ae4d141c-1105-4117-9a61-d4f3f9270d97: File exists 00:06:42.611 [2024-11-17 13:16:31.593676] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:06:42.611 [2024-11-17 13:16:31.593699] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:06:42.611 [2024-11-17 13:16:31.593956] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:06:42.611 [2024-11-17 13:16:31.594130] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:06:42.611 [2024-11-17 13:16:31.594147] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:06:42.611 [2024-11-17 13:16:31.594350] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:42.611 pt0 00:06:42.611 13:16:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.611 13:16:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:06:42.611 13:16:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:06:42.611 13:16:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.611 13:16:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.611 13:16:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:42.611 13:16:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:42.611 13:16:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:42.611 13:16:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:06:42.611 13:16:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.611 13:16:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.611 [2024-11-17 13:16:31.617803] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:42.611 13:16:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.611 13:16:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:42.611 13:16:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:42.611 13:16:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:06:42.611 13:16:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60205 00:06:42.611 13:16:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60205 ']' 00:06:42.611 13:16:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60205 00:06:42.611 13:16:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:06:42.611 13:16:31 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:42.611 13:16:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60205 00:06:42.611 13:16:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:42.611 13:16:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:42.611 killing process with pid 60205 00:06:42.611 13:16:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60205' 00:06:42.611 13:16:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60205 00:06:42.611 [2024-11-17 13:16:31.684826] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:42.611 [2024-11-17 13:16:31.684900] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:42.611 [2024-11-17 13:16:31.684963] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:42.611 [2024-11-17 13:16:31.684973] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:06:42.611 13:16:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60205 00:06:43.990 [2024-11-17 13:16:33.088812] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:45.369 13:16:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:06:45.369 00:06:45.369 real 0m4.542s 00:06:45.369 user 0m4.769s 00:06:45.369 sys 0m0.503s 00:06:45.369 13:16:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.369 13:16:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.369 ************************************ 00:06:45.369 END TEST raid1_resize_superblock_test 00:06:45.369 
************************************ 00:06:45.369 13:16:34 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:06:45.369 13:16:34 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:06:45.369 13:16:34 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:06:45.369 13:16:34 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:06:45.369 13:16:34 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:06:45.369 13:16:34 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:06:45.369 13:16:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:45.369 13:16:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.369 13:16:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:45.369 ************************************ 00:06:45.369 START TEST raid_function_test_raid0 00:06:45.369 ************************************ 00:06:45.369 13:16:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:06:45.369 13:16:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:06:45.369 13:16:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:45.369 13:16:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:45.369 13:16:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60307 00:06:45.370 13:16:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:45.370 Process raid pid: 60307 00:06:45.370 13:16:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60307' 00:06:45.370 13:16:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60307 00:06:45.370 13:16:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 
60307 ']' 00:06:45.370 13:16:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.370 13:16:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:45.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.370 13:16:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.370 13:16:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:45.370 13:16:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:45.370 [2024-11-17 13:16:34.383827] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:06:45.370 [2024-11-17 13:16:34.383950] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:45.370 [2024-11-17 13:16:34.562517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.629 [2024-11-17 13:16:34.679638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.895 [2024-11-17 13:16:34.885191] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:45.895 [2024-11-17 13:16:34.885251] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:46.203 13:16:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.203 13:16:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:06:46.203 13:16:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:06:46.203 13:16:35 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.203 13:16:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:46.203 Base_1 00:06:46.203 13:16:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.203 13:16:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:06:46.203 13:16:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.203 13:16:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:46.203 Base_2 00:06:46.203 13:16:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.203 13:16:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:06:46.203 13:16:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.203 13:16:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:46.203 [2024-11-17 13:16:35.316477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:46.203 [2024-11-17 13:16:35.318505] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:46.203 [2024-11-17 13:16:35.318586] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:46.203 [2024-11-17 13:16:35.318600] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:46.203 [2024-11-17 13:16:35.318900] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:46.203 [2024-11-17 13:16:35.319084] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:46.203 [2024-11-17 13:16:35.319101] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: 
raid bdev is created with name raid, raid_bdev 0x617000007780 00:06:46.203 [2024-11-17 13:16:35.319296] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:46.203 13:16:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.203 13:16:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:06:46.203 13:16:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:06:46.203 13:16:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.203 13:16:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:46.203 13:16:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.203 13:16:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:06:46.203 13:16:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:06:46.203 13:16:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:06:46.203 13:16:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:06:46.203 13:16:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:46.203 13:16:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:46.203 13:16:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:46.203 13:16:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:46.203 13:16:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:06:46.203 13:16:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:46.203 13:16:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 
-- # (( i < 1 )) 00:06:46.203 13:16:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:06:46.462 [2024-11-17 13:16:35.540168] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:06:46.462 /dev/nbd0 00:06:46.462 13:16:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:46.462 13:16:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:46.462 13:16:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:46.462 13:16:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:06:46.462 13:16:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:46.462 13:16:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:46.462 13:16:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:46.462 13:16:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:06:46.462 13:16:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:46.462 13:16:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:46.462 13:16:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:46.462 1+0 records in 00:06:46.462 1+0 records out 00:06:46.462 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285163 s, 14.4 MB/s 00:06:46.462 13:16:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:46.462 13:16:35 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@890 -- # size=4096 00:06:46.463 13:16:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:46.463 13:16:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:46.463 13:16:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:06:46.463 13:16:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:46.463 13:16:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:46.463 13:16:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:06:46.463 13:16:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:46.463 13:16:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:46.722 13:16:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:46.722 { 00:06:46.722 "nbd_device": "/dev/nbd0", 00:06:46.722 "bdev_name": "raid" 00:06:46.722 } 00:06:46.722 ]' 00:06:46.722 13:16:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:46.722 { 00:06:46.722 "nbd_device": "/dev/nbd0", 00:06:46.722 "bdev_name": "raid" 00:06:46.722 } 00:06:46.722 ]' 00:06:46.722 13:16:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:46.722 13:16:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:46.722 13:16:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:46.722 13:16:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:46.722 13:16:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:06:46.722 
13:16:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:06:46.722 13:16:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:06:46.722 13:16:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:06:46.722 13:16:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:06:46.722 13:16:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:06:46.722 13:16:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:06:46.722 13:16:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:06:46.722 13:16:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:06:46.722 13:16:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:06:46.722 13:16:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:06:46.722 13:16:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:06:46.722 13:16:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:06:46.722 13:16:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:06:46.722 13:16:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:06:46.722 13:16:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:06:46.722 13:16:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:06:46.722 13:16:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:06:46.722 13:16:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:06:46.722 13:16:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 
00:06:46.722 13:16:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:06:46.722 4096+0 records in 00:06:46.722 4096+0 records out 00:06:46.722 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0320659 s, 65.4 MB/s 00:06:46.722 13:16:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:06:46.981 4096+0 records in 00:06:46.981 4096+0 records out 00:06:46.981 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.194202 s, 10.8 MB/s 00:06:46.981 13:16:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:06:46.981 13:16:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:46.981 13:16:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:06:46.981 13:16:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:46.981 13:16:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:06:46.981 13:16:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:06:46.981 13:16:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:06:46.981 128+0 records in 00:06:46.981 128+0 records out 00:06:46.981 65536 bytes (66 kB, 64 KiB) copied, 0.0012339 s, 53.1 MB/s 00:06:46.981 13:16:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:06:46.981 13:16:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:46.981 13:16:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:46.981 13:16:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( 
i++ )) 00:06:46.981 13:16:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:46.981 13:16:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:06:46.981 13:16:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:06:46.981 13:16:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:06:46.981 2035+0 records in 00:06:46.981 2035+0 records out 00:06:46.981 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.012225 s, 85.2 MB/s 00:06:46.981 13:16:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:06:46.981 13:16:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:46.981 13:16:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:46.981 13:16:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:46.981 13:16:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:46.981 13:16:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:06:46.981 13:16:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:06:46.981 13:16:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:06:46.981 456+0 records in 00:06:46.981 456+0 records out 00:06:46.981 233472 bytes (233 kB, 228 KiB) copied, 0.00339828 s, 68.7 MB/s 00:06:47.240 13:16:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:06:47.240 13:16:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:47.240 13:16:36 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:47.240 13:16:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:47.240 13:16:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:47.240 13:16:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:06:47.240 13:16:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:06:47.240 13:16:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:06:47.240 13:16:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:47.240 13:16:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:47.240 13:16:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:06:47.240 13:16:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:47.240 13:16:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:06:47.240 13:16:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:47.240 [2024-11-17 13:16:36.441118] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:47.240 13:16:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:47.240 13:16:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:47.240 13:16:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:47.240 13:16:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:47.240 13:16:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:06:47.240 13:16:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:06:47.240 13:16:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:06:47.240 13:16:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:06:47.240 13:16:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:47.240 13:16:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:47.499 13:16:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:47.499 13:16:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:47.499 13:16:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:47.499 13:16:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:47.499 13:16:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:47.499 13:16:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:47.499 13:16:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:06:47.499 13:16:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:06:47.499 13:16:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:47.499 13:16:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:06:47.499 13:16:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:06:47.499 13:16:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60307 00:06:47.499 13:16:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 60307 ']' 00:06:47.499 13:16:36 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@958 -- # kill -0 60307 00:06:47.499 13:16:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:06:47.499 13:16:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:47.757 13:16:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60307 00:06:47.757 13:16:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:47.757 13:16:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:47.757 killing process with pid 60307 00:06:47.757 13:16:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60307' 00:06:47.757 13:16:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 60307 00:06:47.757 [2024-11-17 13:16:36.757886] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:47.757 [2024-11-17 13:16:36.757995] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:47.757 [2024-11-17 13:16:36.758054] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:47.757 13:16:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 60307 00:06:47.758 [2024-11-17 13:16:36.758070] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:06:47.758 [2024-11-17 13:16:36.965325] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:49.140 13:16:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:06:49.140 00:06:49.140 real 0m3.764s 00:06:49.140 user 0m4.395s 00:06:49.140 sys 0m0.919s 00:06:49.140 13:16:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.140 13:16:38 bdev_raid.raid_function_test_raid0 
-- common/autotest_common.sh@10 -- # set +x 00:06:49.140 ************************************ 00:06:49.140 END TEST raid_function_test_raid0 00:06:49.140 ************************************ 00:06:49.140 13:16:38 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:06:49.140 13:16:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:49.140 13:16:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.140 13:16:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:49.140 ************************************ 00:06:49.140 START TEST raid_function_test_concat 00:06:49.140 ************************************ 00:06:49.140 13:16:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:06:49.140 13:16:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:06:49.140 13:16:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:49.140 13:16:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:49.140 13:16:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60431 00:06:49.140 13:16:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:49.140 Process raid pid: 60431 00:06:49.140 13:16:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60431' 00:06:49.140 13:16:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60431 00:06:49.140 13:16:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 60431 ']' 00:06:49.140 13:16:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.140 13:16:38 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:06:49.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.140 13:16:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.140 13:16:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:49.140 13:16:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:49.140 [2024-11-17 13:16:38.205370] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:06:49.140 [2024-11-17 13:16:38.205503] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:49.400 [2024-11-17 13:16:38.378352] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.400 [2024-11-17 13:16:38.487093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.660 [2024-11-17 13:16:38.684845] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:49.660 [2024-11-17 13:16:38.684876] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:49.920 13:16:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:49.920 13:16:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:06:49.920 13:16:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:06:49.920 13:16:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.920 13:16:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:49.920 Base_1 
00:06:49.920 13:16:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.920 13:16:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:06:49.920 13:16:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.920 13:16:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:49.920 Base_2 00:06:49.920 13:16:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.920 13:16:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:06:49.920 13:16:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.920 13:16:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:49.920 [2024-11-17 13:16:39.121498] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:49.920 [2024-11-17 13:16:39.123284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:49.920 [2024-11-17 13:16:39.123352] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:49.920 [2024-11-17 13:16:39.123364] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:49.920 [2024-11-17 13:16:39.123596] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:49.920 [2024-11-17 13:16:39.123773] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:49.920 [2024-11-17 13:16:39.123789] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:06:49.920 [2024-11-17 13:16:39.123943] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:49.920 13:16:39 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.920 13:16:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:06:49.920 13:16:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:06:49.921 13:16:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.921 13:16:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:50.198 13:16:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.198 13:16:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:06:50.198 13:16:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:06:50.198 13:16:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:06:50.198 13:16:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:06:50.198 13:16:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:50.198 13:16:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:50.199 13:16:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:50.199 13:16:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:50.199 13:16:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:06:50.199 13:16:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:50.199 13:16:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:50.199 13:16:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_start_disk raid /dev/nbd0 00:06:50.199 [2024-11-17 13:16:39.365130] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:06:50.199 /dev/nbd0 00:06:50.199 13:16:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:50.199 13:16:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:50.199 13:16:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:50.199 13:16:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:06:50.199 13:16:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:50.199 13:16:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:50.199 13:16:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:50.199 13:16:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:06:50.199 13:16:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:50.199 13:16:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:50.199 13:16:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:50.199 1+0 records in 00:06:50.199 1+0 records out 00:06:50.199 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000397286 s, 10.3 MB/s 00:06:50.199 13:16:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:50.199 13:16:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:06:50.199 13:16:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:50.463 13:16:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:50.463 13:16:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:06:50.463 13:16:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:50.463 13:16:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:50.463 13:16:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:06:50.463 13:16:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:50.463 13:16:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:50.463 13:16:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:50.463 { 00:06:50.463 "nbd_device": "/dev/nbd0", 00:06:50.463 "bdev_name": "raid" 00:06:50.463 } 00:06:50.463 ]' 00:06:50.463 13:16:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:50.463 { 00:06:50.463 "nbd_device": "/dev/nbd0", 00:06:50.463 "bdev_name": "raid" 00:06:50.463 } 00:06:50.463 ]' 00:06:50.463 13:16:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:50.723 13:16:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:50.723 13:16:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:50.723 13:16:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:50.723 13:16:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:06:50.723 13:16:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:06:50.723 13:16:39 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:06:50.723 13:16:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:06:50.723 13:16:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:06:50.723 13:16:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:06:50.723 13:16:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:06:50.723 13:16:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:06:50.723 13:16:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:06:50.723 13:16:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:06:50.723 13:16:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:06:50.723 13:16:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:06:50.723 13:16:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:06:50.723 13:16:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:06:50.723 13:16:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:06:50.723 13:16:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:06:50.723 13:16:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:06:50.723 13:16:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:06:50.723 13:16:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:06:50.723 13:16:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:06:50.723 13:16:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd 
if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:06:50.723 4096+0 records in 00:06:50.723 4096+0 records out 00:06:50.723 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0318441 s, 65.9 MB/s 00:06:50.723 13:16:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:06:51.005 4096+0 records in 00:06:51.005 4096+0 records out 00:06:51.005 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.183747 s, 11.4 MB/s 00:06:51.005 13:16:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:06:51.005 13:16:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:51.005 13:16:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:06:51.005 13:16:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:51.005 13:16:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:06:51.005 13:16:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:06:51.005 13:16:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:06:51.005 128+0 records in 00:06:51.005 128+0 records out 00:06:51.005 65536 bytes (66 kB, 64 KiB) copied, 0.00102768 s, 63.8 MB/s 00:06:51.005 13:16:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:06:51.005 13:16:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:51.005 13:16:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:51.005 13:16:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:51.005 13:16:39 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:51.005 13:16:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:06:51.005 13:16:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:06:51.005 13:16:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:06:51.005 2035+0 records in 00:06:51.005 2035+0 records out 00:06:51.005 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0140394 s, 74.2 MB/s 00:06:51.005 13:16:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:06:51.005 13:16:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:51.005 13:16:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:51.005 13:16:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:51.005 13:16:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:51.005 13:16:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:06:51.005 13:16:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:06:51.005 13:16:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:06:51.005 456+0 records in 00:06:51.005 456+0 records out 00:06:51.005 233472 bytes (233 kB, 228 KiB) copied, 0.00357736 s, 65.3 MB/s 00:06:51.005 13:16:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:06:51.005 13:16:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:51.005 13:16:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 
2097152 /raidtest/raidrandtest /dev/nbd0 00:06:51.005 13:16:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:51.005 13:16:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:51.005 13:16:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:06:51.005 13:16:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:06:51.005 13:16:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:06:51.005 13:16:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:51.005 13:16:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:51.005 13:16:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:06:51.005 13:16:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:51.005 13:16:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:06:51.314 13:16:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:51.314 [2024-11-17 13:16:40.287423] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:51.314 13:16:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:51.314 13:16:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:51.314 13:16:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:51.314 13:16:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:51.314 13:16:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:51.314 13:16:40 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:06:51.314 13:16:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:06:51.314 13:16:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:06:51.314 13:16:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:51.314 13:16:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:51.314 13:16:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:51.314 13:16:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:51.314 13:16:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:51.573 13:16:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:51.573 13:16:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:51.573 13:16:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:51.573 13:16:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:06:51.573 13:16:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:06:51.573 13:16:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:51.573 13:16:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:06:51.573 13:16:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:06:51.573 13:16:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60431 00:06:51.573 13:16:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 60431 ']' 00:06:51.573 13:16:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- 
# kill -0 60431 00:06:51.573 13:16:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 00:06:51.573 13:16:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:51.573 13:16:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60431 00:06:51.573 13:16:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:51.573 13:16:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:51.573 killing process with pid 60431 00:06:51.573 13:16:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60431' 00:06:51.573 13:16:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 60431 00:06:51.573 [2024-11-17 13:16:40.599888] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:51.573 [2024-11-17 13:16:40.599999] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:51.573 13:16:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 60431 00:06:51.573 [2024-11-17 13:16:40.600070] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:51.573 [2024-11-17 13:16:40.600082] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:06:51.832 [2024-11-17 13:16:40.811577] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:52.769 13:16:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:06:52.769 00:06:52.769 real 0m3.780s 00:06:52.769 user 0m4.437s 00:06:52.769 sys 0m0.906s 00:06:52.769 13:16:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.769 13:16:41 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@10 -- # set +x 00:06:52.769 ************************************ 00:06:52.769 END TEST raid_function_test_concat 00:06:52.769 ************************************ 00:06:52.769 13:16:41 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:06:52.769 13:16:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:52.769 13:16:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.769 13:16:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:52.769 ************************************ 00:06:52.769 START TEST raid0_resize_test 00:06:52.769 ************************************ 00:06:52.769 13:16:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0 00:06:52.769 13:16:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:06:52.769 13:16:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:06:52.769 13:16:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:06:52.769 13:16:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:06:52.769 13:16:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:06:52.769 13:16:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:06:52.769 13:16:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:06:52.769 13:16:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:06:52.769 13:16:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60558 00:06:52.769 Process raid pid: 60558 00:06:52.769 13:16:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60558' 00:06:52.769 13:16:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 
00:06:52.769 13:16:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60558
00:06:52.769 13:16:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60558 ']'
00:06:52.769 13:16:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:52.769 13:16:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:52.769 13:16:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:52.769 13:16:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:52.769 13:16:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:06:53.028 [2024-11-17 13:16:42.044163] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization...
00:06:53.028 [2024-11-17 13:16:42.044304] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:53.028 [2024-11-17 13:16:42.217801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:53.287 [2024-11-17 13:16:42.327684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:53.546 [2024-11-17 13:16:42.533367] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:53.546 [2024-11-17 13:16:42.533406] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:53.805 13:16:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:53.805 13:16:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0
00:06:53.805 13:16:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512
00:06:53.805 13:16:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:53.805 13:16:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
Base_1
00:06:53.805 13:16:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:53.805 13:16:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512
00:06:53.805 13:16:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:53.805 13:16:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
Base_2
00:06:53.805 13:16:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:53.805 13:16:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']'
00:06:53.805 13:16:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid
00:06:53.805 13:16:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:53.805 13:16:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:06:53.805 [2024-11-17 13:16:42.894971] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed
00:06:53.805 [2024-11-17 13:16:42.896725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed
00:06:53.805 [2024-11-17 13:16:42.896784] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:06:53.805 [2024-11-17 13:16:42.896796] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:06:53.805 [2024-11-17 13:16:42.897033] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0
00:06:53.805 [2024-11-17 13:16:42.897176] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:06:53.805 [2024-11-17 13:16:42.897193] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
00:06:53.805 [2024-11-17 13:16:42.897348] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:53.805 13:16:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:53.805 13:16:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64
00:06:53.805 13:16:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:53.805 13:16:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:06:53.805 [2024-11-17 13:16:42.902934] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:06:53.805 [2024-11-17 13:16:42.902961] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072
true
00:06:53.806 13:16:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:53.806 13:16:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid
00:06:53.806 13:16:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks'
00:06:53.806 13:16:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:53.806 13:16:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:06:53.806 [2024-11-17 13:16:42.919070] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:06:53.806 13:16:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:53.806 13:16:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072
00:06:53.806 13:16:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64
00:06:53.806 13:16:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']'
00:06:53.806 13:16:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64
00:06:53.806 13:16:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']'
00:06:53.806 13:16:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64
00:06:53.806 13:16:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:53.806 13:16:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:06:53.806 [2024-11-17 13:16:42.962813] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:06:53.806 [2024-11-17 13:16:42.962839] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072
00:06:53.806 [2024-11-17 13:16:42.962864] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144
true
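The check in the trace above reads `num_blocks` for the raid bdev back through `bdev_get_bdevs | jq` and compares it to an expected size computed from the base bdevs: with 512-byte blocks, two 32 MiB null bdevs give a 131072-block raid0, and resizing both bases to 64 MiB doubles that to 262144. A minimal sketch of that arithmetic in plain shell; the helper names here are invented for illustration and are not part of the test suite:

```shell
#!/usr/bin/env bash
# Sketch of the size arithmetic the raid0_resize_test verifies.
# Assumption: helper names (mb_to_blocks, raid0_blocks) are ours, not SPDK's.

blksize=512

# Convert a size in MiB to a block count at the given block size.
mb_to_blocks() {
    local mb=$1
    echo $(( mb * 1024 * 1024 / blksize ))
}

# raid0 capacity: the sum of all base bdev sizes, in blocks.
raid0_blocks() {
    local total=0 mb
    for mb in "$@"; do
        total=$(( total + $(mb_to_blocks "$mb") ))
    done
    echo "$total"
}

raid0_blocks 32 32   # two 32 MiB bases -> 131072 blocks
raid0_blocks 64 64   # after resizing both to 64 MiB -> 262144 blocks
```

For the raid1 variant later in this log, capacity tracks the smallest base bdev rather than the sum, which is why the same two 32 MiB bases report 65536 blocks there.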
00:06:53.806 13:16:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:53.806 13:16:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks'
00:06:53.806 13:16:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid
00:06:53.806 13:16:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:53.806 13:16:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:06:53.806 [2024-11-17 13:16:42.978965] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:06:53.806 13:16:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:53.806 13:16:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144
00:06:53.806 13:16:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128
00:06:53.806 13:16:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']'
00:06:53.806 13:16:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128
00:06:53.806 13:16:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']'
00:06:53.806 13:16:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60558
00:06:53.806 13:16:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60558 ']'
00:06:53.806 13:16:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 60558
00:06:53.806 13:16:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname
00:06:53.806 13:16:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:53.806 13:16:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60558
00:06:54.065 13:16:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:54.065 13:16:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:54.065 killing process with pid 60558
13:16:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60558'
13:16:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 60558
[2024-11-17 13:16:43.033425] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
[2024-11-17 13:16:43.033511] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
[2024-11-17 13:16:43.033564] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
[2024-11-17 13:16:43.033574] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
13:16:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 60558
[2024-11-17 13:16:43.050624] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:06:55.002 13:16:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0
00:06:55.002
00:06:55.002 real 0m2.185s
00:06:55.002 user 0m2.307s
00:06:55.002 sys 0m0.324s
13:16:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable
13:16:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:06:55.002 ************************************
00:06:55.002 END TEST raid0_resize_test
************************************
13:16:44 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1
13:16:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
13:16:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
13:16:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:06:55.002 ************************************
00:06:55.002 START TEST raid1_resize_test
************************************
00:06:55.002 13:16:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1
00:06:55.002 13:16:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1
00:06:55.002 13:16:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512
00:06:55.002 13:16:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32
00:06:55.002 13:16:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64
00:06:55.002 13:16:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt
00:06:55.002 13:16:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb
00:06:55.002 13:16:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb
00:06:55.002 13:16:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size
00:06:55.002 13:16:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60614
00:06:55.002 13:16:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60614'
Process raid pid: 60614
00:06:55.002 13:16:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:06:55.002 13:16:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60614
00:06:55.002 13:16:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60614 ']'
00:06:55.002 13:16:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:55.002 13:16:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:55.002 13:16:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:55.002 13:16:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:55.002 13:16:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:06:55.261 [2024-11-17 13:16:44.300518] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization...
00:06:55.261 [2024-11-17 13:16:44.300633] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:55.261 [2024-11-17 13:16:44.457655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:55.520 [2024-11-17 13:16:44.568966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:55.780 [2024-11-17 13:16:44.772045] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:55.780 [2024-11-17 13:16:44.772079] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:56.039 13:16:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:56.039 13:16:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0
00:06:56.039 13:16:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512
00:06:56.039 13:16:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:56.039 13:16:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
Base_1
00:06:56.039 13:16:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:56.039 13:16:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512
00:06:56.039 13:16:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:56.039 13:16:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
Base_2
00:06:56.039 13:16:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:56.039 13:16:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']'
00:06:56.039 13:16:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid
00:06:56.039 13:16:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:56.039 13:16:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:06:56.039 [2024-11-17 13:16:45.145911] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed
00:06:56.039 [2024-11-17 13:16:45.147713] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed
00:06:56.039 [2024-11-17 13:16:45.147772] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:06:56.039 [2024-11-17 13:16:45.147784] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
00:06:56.039 [2024-11-17 13:16:45.148011] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0
00:06:56.039 [2024-11-17 13:16:45.148153] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:06:56.039 [2024-11-17 13:16:45.148171] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
00:06:56.039 [2024-11-17 13:16:45.148355] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:56.039 13:16:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:56.039 13:16:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64
00:06:56.039 13:16:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:56.039 13:16:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:06:56.039 [2024-11-17 13:16:45.157874] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:06:56.039 [2024-11-17 13:16:45.157905] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072
true
00:06:56.039 13:16:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:56.039 13:16:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid
00:06:56.039 13:16:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks'
00:06:56.039 13:16:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:56.039 13:16:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:06:56.039 [2024-11-17 13:16:45.174010] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:06:56.039 13:16:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:56.039 13:16:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536
00:06:56.039 13:16:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32
00:06:56.039 13:16:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']'
00:06:56.039 13:16:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32
00:06:56.039 13:16:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']'
00:06:56.039 13:16:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64
00:06:56.039 13:16:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:56.039 13:16:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:06:56.039 [2024-11-17 13:16:45.221743] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:06:56.039 [2024-11-17 13:16:45.221817] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072
00:06:56.039 [2024-11-17 13:16:45.221864] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072
true
00:06:56.040 13:16:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:56.040 13:16:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid
00:06:56.040 13:16:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:56.040 13:16:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:06:56.040 13:16:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks'
00:06:56.040 [2024-11-17 13:16:45.233910] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:06:56.040 13:16:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:56.299 13:16:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072
00:06:56.299 13:16:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64
00:06:56.299 13:16:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']'
00:06:56.299 13:16:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64
00:06:56.299 13:16:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']'
00:06:56.299 13:16:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60614
00:06:56.299 13:16:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60614 ']'
00:06:56.299 13:16:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 60614
00:06:56.299 13:16:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname
00:06:56.299 13:16:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:56.299 13:16:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60614
00:06:56.299 13:16:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:56.299 13:16:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:56.299 13:16:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60614'
killing process with pid 60614
00:06:56.299 13:16:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 60614
00:06:56.299 [2024-11-17 13:16:45.322834] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:06:56.299 [2024-11-17 13:16:45.322960] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:06:56.299 13:16:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 60614
00:06:56.299 [2024-11-17 13:16:45.323473] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:06:56.299 [2024-11-17 13:16:45.323543] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:06:56.299 [2024-11-17 13:16:45.340898] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:06:57.259 13:16:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0
00:06:57.259
00:06:57.259 real 0m2.226s
00:06:57.259 user 0m2.349s
00:06:57.259 sys 0m0.344s
13:16:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable
13:16:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST raid1_resize_test
************************************
00:06:57.518 13:16:46 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4}
00:06:57.518 13:16:46 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:06:57.518 13:16:46 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false
00:06:57.518 13:16:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:06:57.518 13:16:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:57.518 13:16:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST raid_state_function_test
************************************
00:06:57.518 13:16:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false
00:06:57.518 13:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0
00:06:57.518 13:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2
00:06:57.518 13:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:06:57.518 13:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:06:57.518 13:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:06:57.518 13:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:06:57.518 13:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:06:57.518 13:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:06:57.518 13:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:06:57.518 13:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:06:57.518 13:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:06:57.518 13:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:06:57.518 13:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:06:57.518 13:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:06:57.518 13:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:06:57.518 13:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:06:57.518 13:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:06:57.518 13:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:06:57.518 13:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']'
00:06:57.518 13:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:06:57.518 13:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:06:57.518 13:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:06:57.518 13:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:06:57.518 13:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60671
00:06:57.518 13:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
Process raid pid: 60671
00:06:57.518 13:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60671'
00:06:57.518 13:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60671
00:06:57.518 13:16:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 60671 ']'
00:06:57.518 13:16:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:57.518 13:16:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:57.518 13:16:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:57.518 13:16:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:57.518 13:16:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:06:57.518 [2024-11-17 13:16:46.598806] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization...
00:06:57.518 [2024-11-17 13:16:46.599015] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:57.778 [2024-11-17 13:16:46.769991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:57.778 [2024-11-17 13:16:46.885824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:58.037 [2024-11-17 13:16:47.091265] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:58.037 [2024-11-17 13:16:47.091343] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:58.296 13:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:58.296 13:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0
00:06:58.296 13:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:06:58.296 13:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:58.296 13:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:06:58.296 [2024-11-17 13:16:47.440040] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:06:58.296 [2024-11-17 13:16:47.440152] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:06:58.296 [2024-11-17 13:16:47.440181] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:06:58.296 [2024-11-17 13:16:47.440204] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:06:58.296 13:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:58.296 13:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2
00:06:58.296 13:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:06:58.296 13:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:06:58.296 13:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:06:58.296 13:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:06:58.296 13:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:06:58.296 13:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:06:58.296 13:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:06:58.296 13:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:06:58.296 13:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:06:58.296 13:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:06:58.296 13:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:06:58.296 13:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:58.296 13:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:06:58.296 13:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:58.296 13:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:06:58.296 "name": "Existed_Raid",
00:06:58.296 "uuid": "00000000-0000-0000-0000-000000000000",
00:06:58.296 "strip_size_kb": 64,
00:06:58.296 "state": "configuring",
00:06:58.296 "raid_level": "raid0",
00:06:58.296 "superblock": false,
00:06:58.296 "num_base_bdevs": 2,
00:06:58.296 "num_base_bdevs_discovered": 0,
00:06:58.296 "num_base_bdevs_operational": 2,
00:06:58.296 "base_bdevs_list": [
00:06:58.296 {
00:06:58.296 "name": "BaseBdev1",
00:06:58.296 "uuid": "00000000-0000-0000-0000-000000000000",
00:06:58.296 "is_configured": false,
00:06:58.296 "data_offset": 0,
00:06:58.296 "data_size": 0
00:06:58.296 },
00:06:58.296 {
00:06:58.296 "name": "BaseBdev2",
00:06:58.296 "uuid": "00000000-0000-0000-0000-000000000000",
00:06:58.296 "is_configured": false,
00:06:58.296 "data_offset": 0,
00:06:58.296 "data_size": 0
00:06:58.296 }
00:06:58.296 ]
00:06:58.296 }'
00:06:58.296 13:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:06:58.296 13:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:06:58.864 13:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:06:58.864 13:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:58.864 13:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:06:58.864 [2024-11-17 13:16:47.847316] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:06:58.864 [2024-11-17 13:16:47.847413] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:06:58.864 13:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:58.864 13:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:06:58.864 13:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:58.864 13:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:06:58.864 [2024-11-17 13:16:47.855291] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:06:58.864 [2024-11-17 13:16:47.855329] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:06:58.864 [2024-11-17 13:16:47.855338] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:06:58.864 [2024-11-17 13:16:47.855349] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:06:58.864 13:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:58.864 13:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:06:58.864 13:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:58.864 13:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:06:58.864 [2024-11-17 13:16:47.900459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
BaseBdev1
00:06:58.864 13:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:58.864 13:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:06:58.864 13:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:06:58.864 13:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:06:58.864 13:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:06:58.864 13:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:06:58.864 13:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:06:58.864 13:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:06:58.864 13:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:58.864 13:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:06:58.864 13:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:58.864 13:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:06:58.864 13:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:58.864 13:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:06:58.864 [
00:06:58.864 {
00:06:58.864 "name": "BaseBdev1",
00:06:58.864 "aliases": [
00:06:58.864 "96bba56a-621f-46c8-8f5f-d2b26f51ce8c"
00:06:58.864 ],
00:06:58.864 "product_name": "Malloc disk",
00:06:58.864 "block_size": 512,
00:06:58.864 "num_blocks": 65536,
00:06:58.864 "uuid":
"96bba56a-621f-46c8-8f5f-d2b26f51ce8c", 00:06:58.864 "assigned_rate_limits": { 00:06:58.864 "rw_ios_per_sec": 0, 00:06:58.864 "rw_mbytes_per_sec": 0, 00:06:58.864 "r_mbytes_per_sec": 0, 00:06:58.864 "w_mbytes_per_sec": 0 00:06:58.864 }, 00:06:58.864 "claimed": true, 00:06:58.864 "claim_type": "exclusive_write", 00:06:58.864 "zoned": false, 00:06:58.864 "supported_io_types": { 00:06:58.864 "read": true, 00:06:58.864 "write": true, 00:06:58.864 "unmap": true, 00:06:58.864 "flush": true, 00:06:58.864 "reset": true, 00:06:58.864 "nvme_admin": false, 00:06:58.864 "nvme_io": false, 00:06:58.864 "nvme_io_md": false, 00:06:58.864 "write_zeroes": true, 00:06:58.864 "zcopy": true, 00:06:58.864 "get_zone_info": false, 00:06:58.864 "zone_management": false, 00:06:58.864 "zone_append": false, 00:06:58.864 "compare": false, 00:06:58.864 "compare_and_write": false, 00:06:58.864 "abort": true, 00:06:58.864 "seek_hole": false, 00:06:58.864 "seek_data": false, 00:06:58.864 "copy": true, 00:06:58.864 "nvme_iov_md": false 00:06:58.864 }, 00:06:58.864 "memory_domains": [ 00:06:58.864 { 00:06:58.864 "dma_device_id": "system", 00:06:58.864 "dma_device_type": 1 00:06:58.864 }, 00:06:58.864 { 00:06:58.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:58.864 "dma_device_type": 2 00:06:58.864 } 00:06:58.864 ], 00:06:58.864 "driver_specific": {} 00:06:58.864 } 00:06:58.864 ] 00:06:58.864 13:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.864 13:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:06:58.864 13:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:58.864 13:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:58.864 13:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:58.864 13:16:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:58.864 13:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:58.864 13:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:58.864 13:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:58.864 13:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:58.864 13:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:58.864 13:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:58.864 13:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:58.864 13:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:58.864 13:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.864 13:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.864 13:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.864 13:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:58.864 "name": "Existed_Raid", 00:06:58.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:58.864 "strip_size_kb": 64, 00:06:58.864 "state": "configuring", 00:06:58.864 "raid_level": "raid0", 00:06:58.864 "superblock": false, 00:06:58.864 "num_base_bdevs": 2, 00:06:58.864 "num_base_bdevs_discovered": 1, 00:06:58.864 "num_base_bdevs_operational": 2, 00:06:58.864 "base_bdevs_list": [ 00:06:58.864 { 00:06:58.864 "name": "BaseBdev1", 00:06:58.864 "uuid": "96bba56a-621f-46c8-8f5f-d2b26f51ce8c", 00:06:58.864 "is_configured": true, 00:06:58.864 "data_offset": 0, 
00:06:58.864 "data_size": 65536 00:06:58.864 }, 00:06:58.864 { 00:06:58.864 "name": "BaseBdev2", 00:06:58.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:58.864 "is_configured": false, 00:06:58.864 "data_offset": 0, 00:06:58.864 "data_size": 0 00:06:58.864 } 00:06:58.864 ] 00:06:58.864 }' 00:06:58.864 13:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:58.864 13:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.123 13:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:59.123 13:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.123 13:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.124 [2024-11-17 13:16:48.327749] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:59.124 [2024-11-17 13:16:48.327857] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:06:59.124 13:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.124 13:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:59.124 13:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.124 13:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.124 [2024-11-17 13:16:48.339772] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:59.124 [2024-11-17 13:16:48.341662] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:59.124 [2024-11-17 13:16:48.341702] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 
00:06:59.383 13:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.383 13:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:06:59.383 13:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:59.383 13:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:59.383 13:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:59.383 13:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:59.383 13:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:59.383 13:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:59.383 13:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:59.383 13:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:59.383 13:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:59.383 13:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:59.383 13:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:59.383 13:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:59.383 13:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:59.383 13:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.383 13:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.383 13:16:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.383 13:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:59.383 "name": "Existed_Raid", 00:06:59.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:59.383 "strip_size_kb": 64, 00:06:59.383 "state": "configuring", 00:06:59.383 "raid_level": "raid0", 00:06:59.383 "superblock": false, 00:06:59.383 "num_base_bdevs": 2, 00:06:59.383 "num_base_bdevs_discovered": 1, 00:06:59.383 "num_base_bdevs_operational": 2, 00:06:59.383 "base_bdevs_list": [ 00:06:59.383 { 00:06:59.383 "name": "BaseBdev1", 00:06:59.383 "uuid": "96bba56a-621f-46c8-8f5f-d2b26f51ce8c", 00:06:59.383 "is_configured": true, 00:06:59.383 "data_offset": 0, 00:06:59.383 "data_size": 65536 00:06:59.383 }, 00:06:59.383 { 00:06:59.383 "name": "BaseBdev2", 00:06:59.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:59.383 "is_configured": false, 00:06:59.383 "data_offset": 0, 00:06:59.383 "data_size": 0 00:06:59.383 } 00:06:59.383 ] 00:06:59.383 }' 00:06:59.383 13:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:59.383 13:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.643 13:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:06:59.643 13:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.643 13:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.643 [2024-11-17 13:16:48.772504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:59.643 [2024-11-17 13:16:48.772615] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:06:59.643 [2024-11-17 13:16:48.772642] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:59.643 [2024-11-17 13:16:48.773006] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:59.643 [2024-11-17 13:16:48.773230] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:06:59.643 [2024-11-17 13:16:48.773281] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:06:59.643 [2024-11-17 13:16:48.773608] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:59.643 BaseBdev2 00:06:59.643 13:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.643 13:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:06:59.643 13:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:06:59.643 13:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:59.643 13:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:06:59.643 13:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:59.643 13:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:59.643 13:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:06:59.643 13:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.643 13:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.643 13:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.643 13:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:59.643 13:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.643 13:16:48 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.644 [ 00:06:59.644 { 00:06:59.644 "name": "BaseBdev2", 00:06:59.644 "aliases": [ 00:06:59.644 "6f7d1d95-cd97-439b-84b2-6623534b1911" 00:06:59.644 ], 00:06:59.644 "product_name": "Malloc disk", 00:06:59.644 "block_size": 512, 00:06:59.644 "num_blocks": 65536, 00:06:59.644 "uuid": "6f7d1d95-cd97-439b-84b2-6623534b1911", 00:06:59.644 "assigned_rate_limits": { 00:06:59.644 "rw_ios_per_sec": 0, 00:06:59.644 "rw_mbytes_per_sec": 0, 00:06:59.644 "r_mbytes_per_sec": 0, 00:06:59.644 "w_mbytes_per_sec": 0 00:06:59.644 }, 00:06:59.644 "claimed": true, 00:06:59.644 "claim_type": "exclusive_write", 00:06:59.644 "zoned": false, 00:06:59.644 "supported_io_types": { 00:06:59.644 "read": true, 00:06:59.644 "write": true, 00:06:59.644 "unmap": true, 00:06:59.644 "flush": true, 00:06:59.644 "reset": true, 00:06:59.644 "nvme_admin": false, 00:06:59.644 "nvme_io": false, 00:06:59.644 "nvme_io_md": false, 00:06:59.644 "write_zeroes": true, 00:06:59.644 "zcopy": true, 00:06:59.644 "get_zone_info": false, 00:06:59.644 "zone_management": false, 00:06:59.644 "zone_append": false, 00:06:59.644 "compare": false, 00:06:59.644 "compare_and_write": false, 00:06:59.644 "abort": true, 00:06:59.644 "seek_hole": false, 00:06:59.644 "seek_data": false, 00:06:59.644 "copy": true, 00:06:59.644 "nvme_iov_md": false 00:06:59.644 }, 00:06:59.644 "memory_domains": [ 00:06:59.644 { 00:06:59.644 "dma_device_id": "system", 00:06:59.644 "dma_device_type": 1 00:06:59.644 }, 00:06:59.644 { 00:06:59.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:59.644 "dma_device_type": 2 00:06:59.644 } 00:06:59.644 ], 00:06:59.644 "driver_specific": {} 00:06:59.644 } 00:06:59.644 ] 00:06:59.644 13:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.644 13:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:06:59.644 13:16:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:06:59.644 13:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:59.644 13:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:06:59.644 13:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:59.644 13:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:59.644 13:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:59.644 13:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:59.644 13:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:59.644 13:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:59.644 13:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:59.644 13:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:59.644 13:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:59.644 13:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:59.644 13:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:59.644 13:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.644 13:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.644 13:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.644 13:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:06:59.644 "name": "Existed_Raid", 00:06:59.644 "uuid": "5ec89b68-27b0-423c-b20a-9ac26d5a1b89", 00:06:59.644 "strip_size_kb": 64, 00:06:59.644 "state": "online", 00:06:59.644 "raid_level": "raid0", 00:06:59.644 "superblock": false, 00:06:59.644 "num_base_bdevs": 2, 00:06:59.644 "num_base_bdevs_discovered": 2, 00:06:59.644 "num_base_bdevs_operational": 2, 00:06:59.644 "base_bdevs_list": [ 00:06:59.644 { 00:06:59.644 "name": "BaseBdev1", 00:06:59.644 "uuid": "96bba56a-621f-46c8-8f5f-d2b26f51ce8c", 00:06:59.644 "is_configured": true, 00:06:59.644 "data_offset": 0, 00:06:59.644 "data_size": 65536 00:06:59.644 }, 00:06:59.644 { 00:06:59.644 "name": "BaseBdev2", 00:06:59.644 "uuid": "6f7d1d95-cd97-439b-84b2-6623534b1911", 00:06:59.644 "is_configured": true, 00:06:59.644 "data_offset": 0, 00:06:59.644 "data_size": 65536 00:06:59.644 } 00:06:59.644 ] 00:06:59.644 }' 00:06:59.644 13:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:59.644 13:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.211 13:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:00.211 13:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:00.211 13:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:00.211 13:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:00.211 13:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:00.211 13:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:00.211 13:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:00.211 13:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:00.211 13:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.211 13:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:00.211 [2024-11-17 13:16:49.295945] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:00.211 13:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.211 13:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:00.211 "name": "Existed_Raid", 00:07:00.211 "aliases": [ 00:07:00.211 "5ec89b68-27b0-423c-b20a-9ac26d5a1b89" 00:07:00.211 ], 00:07:00.211 "product_name": "Raid Volume", 00:07:00.211 "block_size": 512, 00:07:00.211 "num_blocks": 131072, 00:07:00.211 "uuid": "5ec89b68-27b0-423c-b20a-9ac26d5a1b89", 00:07:00.211 "assigned_rate_limits": { 00:07:00.211 "rw_ios_per_sec": 0, 00:07:00.211 "rw_mbytes_per_sec": 0, 00:07:00.211 "r_mbytes_per_sec": 0, 00:07:00.211 "w_mbytes_per_sec": 0 00:07:00.211 }, 00:07:00.211 "claimed": false, 00:07:00.211 "zoned": false, 00:07:00.211 "supported_io_types": { 00:07:00.211 "read": true, 00:07:00.211 "write": true, 00:07:00.211 "unmap": true, 00:07:00.211 "flush": true, 00:07:00.211 "reset": true, 00:07:00.211 "nvme_admin": false, 00:07:00.211 "nvme_io": false, 00:07:00.211 "nvme_io_md": false, 00:07:00.211 "write_zeroes": true, 00:07:00.211 "zcopy": false, 00:07:00.211 "get_zone_info": false, 00:07:00.211 "zone_management": false, 00:07:00.211 "zone_append": false, 00:07:00.211 "compare": false, 00:07:00.211 "compare_and_write": false, 00:07:00.211 "abort": false, 00:07:00.211 "seek_hole": false, 00:07:00.211 "seek_data": false, 00:07:00.211 "copy": false, 00:07:00.211 "nvme_iov_md": false 00:07:00.211 }, 00:07:00.211 "memory_domains": [ 00:07:00.211 { 00:07:00.211 "dma_device_id": "system", 00:07:00.211 "dma_device_type": 1 00:07:00.211 }, 00:07:00.211 { 00:07:00.211 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:07:00.211 "dma_device_type": 2 00:07:00.211 }, 00:07:00.211 { 00:07:00.211 "dma_device_id": "system", 00:07:00.211 "dma_device_type": 1 00:07:00.211 }, 00:07:00.211 { 00:07:00.211 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:00.211 "dma_device_type": 2 00:07:00.211 } 00:07:00.211 ], 00:07:00.211 "driver_specific": { 00:07:00.211 "raid": { 00:07:00.211 "uuid": "5ec89b68-27b0-423c-b20a-9ac26d5a1b89", 00:07:00.211 "strip_size_kb": 64, 00:07:00.211 "state": "online", 00:07:00.211 "raid_level": "raid0", 00:07:00.211 "superblock": false, 00:07:00.211 "num_base_bdevs": 2, 00:07:00.211 "num_base_bdevs_discovered": 2, 00:07:00.211 "num_base_bdevs_operational": 2, 00:07:00.211 "base_bdevs_list": [ 00:07:00.211 { 00:07:00.211 "name": "BaseBdev1", 00:07:00.211 "uuid": "96bba56a-621f-46c8-8f5f-d2b26f51ce8c", 00:07:00.211 "is_configured": true, 00:07:00.211 "data_offset": 0, 00:07:00.211 "data_size": 65536 00:07:00.211 }, 00:07:00.211 { 00:07:00.211 "name": "BaseBdev2", 00:07:00.211 "uuid": "6f7d1d95-cd97-439b-84b2-6623534b1911", 00:07:00.211 "is_configured": true, 00:07:00.211 "data_offset": 0, 00:07:00.211 "data_size": 65536 00:07:00.211 } 00:07:00.211 ] 00:07:00.211 } 00:07:00.211 } 00:07:00.211 }' 00:07:00.211 13:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:00.211 13:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:00.211 BaseBdev2' 00:07:00.211 13:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:00.211 13:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:00.211 13:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:00.211 13:16:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:00.211 13:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.211 13:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.211 13:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:00.211 13:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.470 13:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:00.470 13:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:00.470 13:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:00.470 13:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:00.470 13:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.470 13:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.470 13:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:00.470 13:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.470 13:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:00.470 13:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:00.470 13:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:00.470 13:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.470 13:16:49 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:07:00.470 [2024-11-17 13:16:49.519324] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:00.470 [2024-11-17 13:16:49.519356] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:00.470 [2024-11-17 13:16:49.519405] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:00.470 13:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.470 13:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:00.470 13:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:00.470 13:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:00.470 13:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:00.470 13:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:00.470 13:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:00.470 13:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:00.470 13:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:00.471 13:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:00.471 13:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:00.471 13:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:00.471 13:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:00.471 13:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:00.471 13:16:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:00.471 13:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:00.471 13:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:00.471 13:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:00.471 13:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.471 13:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.471 13:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.471 13:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:00.471 "name": "Existed_Raid", 00:07:00.471 "uuid": "5ec89b68-27b0-423c-b20a-9ac26d5a1b89", 00:07:00.471 "strip_size_kb": 64, 00:07:00.471 "state": "offline", 00:07:00.471 "raid_level": "raid0", 00:07:00.471 "superblock": false, 00:07:00.471 "num_base_bdevs": 2, 00:07:00.471 "num_base_bdevs_discovered": 1, 00:07:00.471 "num_base_bdevs_operational": 1, 00:07:00.471 "base_bdevs_list": [ 00:07:00.471 { 00:07:00.471 "name": null, 00:07:00.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:00.471 "is_configured": false, 00:07:00.471 "data_offset": 0, 00:07:00.471 "data_size": 65536 00:07:00.471 }, 00:07:00.471 { 00:07:00.471 "name": "BaseBdev2", 00:07:00.471 "uuid": "6f7d1d95-cd97-439b-84b2-6623534b1911", 00:07:00.471 "is_configured": true, 00:07:00.471 "data_offset": 0, 00:07:00.471 "data_size": 65536 00:07:00.471 } 00:07:00.471 ] 00:07:00.471 }' 00:07:00.471 13:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:00.471 13:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.038 13:16:50 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:01.038 13:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:01.038 13:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:01.038 13:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.038 13:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:01.038 13:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.038 13:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.038 13:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:01.038 13:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:01.038 13:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:01.038 13:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.038 13:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.038 [2024-11-17 13:16:50.099806] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:01.038 [2024-11-17 13:16:50.099902] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:01.038 13:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.038 13:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:01.038 13:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:01.038 13:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:01.038 13:16:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:01.038 13:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.038 13:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.038 13:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.038 13:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:01.038 13:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:01.038 13:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:01.038 13:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60671 00:07:01.038 13:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 60671 ']' 00:07:01.038 13:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 60671 00:07:01.038 13:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:01.038 13:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:01.038 13:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60671 00:07:01.297 killing process with pid 60671 00:07:01.297 13:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:01.297 13:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:01.297 13:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60671' 00:07:01.297 13:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 60671 00:07:01.297 [2024-11-17 13:16:50.283297] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:07:01.297 13:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 60671 00:07:01.297 [2024-11-17 13:16:50.301785] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:02.255 13:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:02.255 00:07:02.255 real 0m4.886s 00:07:02.255 user 0m7.040s 00:07:02.255 sys 0m0.762s 00:07:02.255 13:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:02.255 13:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.255 ************************************ 00:07:02.255 END TEST raid_state_function_test 00:07:02.255 ************************************ 00:07:02.255 13:16:51 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:07:02.255 13:16:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:02.255 13:16:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:02.255 13:16:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:02.255 ************************************ 00:07:02.255 START TEST raid_state_function_test_sb 00:07:02.255 ************************************ 00:07:02.255 13:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:07:02.255 13:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:02.255 13:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:02.255 13:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:02.255 13:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:02.255 13:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
00:07:02.255 13:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:02.255 13:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:02.255 13:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:02.255 13:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:02.255 13:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:02.255 13:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:02.255 13:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:02.255 13:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:02.255 13:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:02.255 13:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:02.255 13:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:02.255 13:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:02.255 13:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:02.256 13:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:02.256 13:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:02.256 13:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:02.256 13:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:02.256 13:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:07:02.256 13:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=60924 00:07:02.256 13:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:02.256 Process raid pid: 60924 00:07:02.256 13:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60924' 00:07:02.256 13:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 60924 00:07:02.256 13:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 60924 ']' 00:07:02.256 13:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.256 13:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:02.256 13:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.256 13:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:02.256 13:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:02.519 [2024-11-17 13:16:51.556087] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:07:02.519 [2024-11-17 13:16:51.556289] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:02.519 [2024-11-17 13:16:51.726655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.781 [2024-11-17 13:16:51.843118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.041 [2024-11-17 13:16:52.049825] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:03.041 [2024-11-17 13:16:52.049958] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:03.301 13:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:03.301 13:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:03.301 13:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:03.301 13:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.301 13:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.301 [2024-11-17 13:16:52.390303] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:03.301 [2024-11-17 13:16:52.390355] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:03.301 [2024-11-17 13:16:52.390367] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:03.301 [2024-11-17 13:16:52.390378] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:03.301 13:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.301 
13:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:03.301 13:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:03.301 13:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:03.301 13:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:03.301 13:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:03.301 13:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:03.301 13:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:03.301 13:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:03.301 13:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:03.301 13:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:03.301 13:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:03.301 13:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:03.301 13:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.301 13:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.301 13:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.301 13:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:03.301 "name": "Existed_Raid", 00:07:03.301 "uuid": "3a8db996-6809-42b7-aa07-5569da6cf66d", 00:07:03.301 "strip_size_kb": 
64, 00:07:03.301 "state": "configuring", 00:07:03.301 "raid_level": "raid0", 00:07:03.301 "superblock": true, 00:07:03.301 "num_base_bdevs": 2, 00:07:03.301 "num_base_bdevs_discovered": 0, 00:07:03.301 "num_base_bdevs_operational": 2, 00:07:03.301 "base_bdevs_list": [ 00:07:03.301 { 00:07:03.301 "name": "BaseBdev1", 00:07:03.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:03.301 "is_configured": false, 00:07:03.301 "data_offset": 0, 00:07:03.301 "data_size": 0 00:07:03.301 }, 00:07:03.301 { 00:07:03.301 "name": "BaseBdev2", 00:07:03.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:03.301 "is_configured": false, 00:07:03.301 "data_offset": 0, 00:07:03.301 "data_size": 0 00:07:03.301 } 00:07:03.301 ] 00:07:03.301 }' 00:07:03.301 13:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:03.301 13:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.872 13:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:03.872 13:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.872 13:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.872 [2024-11-17 13:16:52.869412] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:03.872 [2024-11-17 13:16:52.869508] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:03.872 13:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.872 13:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:03.872 13:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.872 13:16:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.872 [2024-11-17 13:16:52.881440] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:03.872 [2024-11-17 13:16:52.881529] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:03.872 [2024-11-17 13:16:52.881561] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:03.872 [2024-11-17 13:16:52.881586] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:03.872 13:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.872 13:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:03.872 13:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.872 13:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.872 [2024-11-17 13:16:52.928840] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:03.872 BaseBdev1 00:07:03.872 13:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.872 13:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:03.872 13:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:03.872 13:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:03.872 13:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:03.872 13:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:03.872 13:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:07:03.872 13:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:03.872 13:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.872 13:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.872 13:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.872 13:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:03.872 13:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.872 13:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.872 [ 00:07:03.872 { 00:07:03.872 "name": "BaseBdev1", 00:07:03.872 "aliases": [ 00:07:03.872 "68a34c5a-4831-46a1-a138-b84afc20f87f" 00:07:03.872 ], 00:07:03.872 "product_name": "Malloc disk", 00:07:03.872 "block_size": 512, 00:07:03.872 "num_blocks": 65536, 00:07:03.872 "uuid": "68a34c5a-4831-46a1-a138-b84afc20f87f", 00:07:03.872 "assigned_rate_limits": { 00:07:03.872 "rw_ios_per_sec": 0, 00:07:03.872 "rw_mbytes_per_sec": 0, 00:07:03.872 "r_mbytes_per_sec": 0, 00:07:03.872 "w_mbytes_per_sec": 0 00:07:03.872 }, 00:07:03.872 "claimed": true, 00:07:03.872 "claim_type": "exclusive_write", 00:07:03.872 "zoned": false, 00:07:03.872 "supported_io_types": { 00:07:03.872 "read": true, 00:07:03.872 "write": true, 00:07:03.872 "unmap": true, 00:07:03.872 "flush": true, 00:07:03.872 "reset": true, 00:07:03.872 "nvme_admin": false, 00:07:03.872 "nvme_io": false, 00:07:03.872 "nvme_io_md": false, 00:07:03.872 "write_zeroes": true, 00:07:03.872 "zcopy": true, 00:07:03.872 "get_zone_info": false, 00:07:03.872 "zone_management": false, 00:07:03.872 "zone_append": false, 00:07:03.872 "compare": false, 00:07:03.872 "compare_and_write": false, 00:07:03.872 
"abort": true, 00:07:03.872 "seek_hole": false, 00:07:03.872 "seek_data": false, 00:07:03.872 "copy": true, 00:07:03.872 "nvme_iov_md": false 00:07:03.872 }, 00:07:03.872 "memory_domains": [ 00:07:03.872 { 00:07:03.872 "dma_device_id": "system", 00:07:03.872 "dma_device_type": 1 00:07:03.872 }, 00:07:03.872 { 00:07:03.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:03.872 "dma_device_type": 2 00:07:03.872 } 00:07:03.872 ], 00:07:03.872 "driver_specific": {} 00:07:03.872 } 00:07:03.872 ] 00:07:03.872 13:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.872 13:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:03.872 13:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:03.872 13:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:03.872 13:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:03.872 13:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:03.872 13:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:03.872 13:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:03.872 13:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:03.872 13:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:03.872 13:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:03.872 13:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:03.872 13:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:03.872 13:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:03.872 13:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.872 13:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.872 13:16:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.872 13:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:03.872 "name": "Existed_Raid", 00:07:03.872 "uuid": "e5c63103-c77a-4bfb-a945-e190dacc213b", 00:07:03.872 "strip_size_kb": 64, 00:07:03.872 "state": "configuring", 00:07:03.872 "raid_level": "raid0", 00:07:03.872 "superblock": true, 00:07:03.872 "num_base_bdevs": 2, 00:07:03.872 "num_base_bdevs_discovered": 1, 00:07:03.872 "num_base_bdevs_operational": 2, 00:07:03.872 "base_bdevs_list": [ 00:07:03.872 { 00:07:03.872 "name": "BaseBdev1", 00:07:03.872 "uuid": "68a34c5a-4831-46a1-a138-b84afc20f87f", 00:07:03.873 "is_configured": true, 00:07:03.873 "data_offset": 2048, 00:07:03.873 "data_size": 63488 00:07:03.873 }, 00:07:03.873 { 00:07:03.873 "name": "BaseBdev2", 00:07:03.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:03.873 "is_configured": false, 00:07:03.873 "data_offset": 0, 00:07:03.873 "data_size": 0 00:07:03.873 } 00:07:03.873 ] 00:07:03.873 }' 00:07:03.873 13:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:03.873 13:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.443 13:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:04.443 13:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.443 13:16:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:04.443 [2024-11-17 13:16:53.388142] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:04.443 [2024-11-17 13:16:53.388197] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:04.443 13:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.443 13:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:04.443 13:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.443 13:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.443 [2024-11-17 13:16:53.400153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:04.443 [2024-11-17 13:16:53.402152] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:04.443 [2024-11-17 13:16:53.402262] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:04.443 13:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.443 13:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:04.443 13:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:04.443 13:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:04.443 13:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:04.443 13:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:04.443 13:16:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:04.443 13:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:04.443 13:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:04.443 13:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:04.443 13:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:04.443 13:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:04.443 13:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:04.443 13:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:04.443 13:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:04.443 13:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.443 13:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.443 13:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.443 13:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:04.443 "name": "Existed_Raid", 00:07:04.443 "uuid": "811c9c47-6187-496a-a0a8-c9641bec9b00", 00:07:04.443 "strip_size_kb": 64, 00:07:04.443 "state": "configuring", 00:07:04.443 "raid_level": "raid0", 00:07:04.443 "superblock": true, 00:07:04.443 "num_base_bdevs": 2, 00:07:04.443 "num_base_bdevs_discovered": 1, 00:07:04.443 "num_base_bdevs_operational": 2, 00:07:04.443 "base_bdevs_list": [ 00:07:04.443 { 00:07:04.443 "name": "BaseBdev1", 00:07:04.443 "uuid": "68a34c5a-4831-46a1-a138-b84afc20f87f", 00:07:04.443 "is_configured": true, 00:07:04.443 "data_offset": 2048, 
00:07:04.443 "data_size": 63488 00:07:04.443 }, 00:07:04.443 { 00:07:04.443 "name": "BaseBdev2", 00:07:04.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:04.443 "is_configured": false, 00:07:04.443 "data_offset": 0, 00:07:04.443 "data_size": 0 00:07:04.443 } 00:07:04.443 ] 00:07:04.443 }' 00:07:04.443 13:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:04.443 13:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.703 13:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:04.703 13:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.703 13:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.703 [2024-11-17 13:16:53.869167] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:04.703 [2024-11-17 13:16:53.869564] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:04.703 [2024-11-17 13:16:53.869618] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:04.703 [2024-11-17 13:16:53.869945] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:04.703 BaseBdev2 00:07:04.703 [2024-11-17 13:16:53.870155] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:04.703 [2024-11-17 13:16:53.870177] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:04.703 [2024-11-17 13:16:53.870343] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:04.703 13:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.703 13:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:07:04.703 13:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:04.703 13:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:04.703 13:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:04.703 13:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:04.703 13:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:04.703 13:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:04.703 13:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.703 13:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.703 13:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.703 13:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:04.703 13:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.703 13:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.703 [ 00:07:04.703 { 00:07:04.703 "name": "BaseBdev2", 00:07:04.703 "aliases": [ 00:07:04.703 "8269d5cd-04cc-4749-b26d-c825b4cece26" 00:07:04.703 ], 00:07:04.703 "product_name": "Malloc disk", 00:07:04.703 "block_size": 512, 00:07:04.703 "num_blocks": 65536, 00:07:04.703 "uuid": "8269d5cd-04cc-4749-b26d-c825b4cece26", 00:07:04.703 "assigned_rate_limits": { 00:07:04.703 "rw_ios_per_sec": 0, 00:07:04.703 "rw_mbytes_per_sec": 0, 00:07:04.703 "r_mbytes_per_sec": 0, 00:07:04.703 "w_mbytes_per_sec": 0 00:07:04.703 }, 00:07:04.703 "claimed": true, 00:07:04.703 "claim_type": 
"exclusive_write", 00:07:04.703 "zoned": false, 00:07:04.703 "supported_io_types": { 00:07:04.703 "read": true, 00:07:04.703 "write": true, 00:07:04.703 "unmap": true, 00:07:04.703 "flush": true, 00:07:04.703 "reset": true, 00:07:04.703 "nvme_admin": false, 00:07:04.703 "nvme_io": false, 00:07:04.703 "nvme_io_md": false, 00:07:04.703 "write_zeroes": true, 00:07:04.703 "zcopy": true, 00:07:04.703 "get_zone_info": false, 00:07:04.703 "zone_management": false, 00:07:04.703 "zone_append": false, 00:07:04.703 "compare": false, 00:07:04.703 "compare_and_write": false, 00:07:04.703 "abort": true, 00:07:04.703 "seek_hole": false, 00:07:04.703 "seek_data": false, 00:07:04.703 "copy": true, 00:07:04.703 "nvme_iov_md": false 00:07:04.703 }, 00:07:04.703 "memory_domains": [ 00:07:04.703 { 00:07:04.703 "dma_device_id": "system", 00:07:04.703 "dma_device_type": 1 00:07:04.703 }, 00:07:04.703 { 00:07:04.703 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:04.703 "dma_device_type": 2 00:07:04.703 } 00:07:04.703 ], 00:07:04.703 "driver_specific": {} 00:07:04.703 } 00:07:04.703 ] 00:07:04.703 13:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.703 13:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:04.703 13:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:04.703 13:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:04.703 13:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:04.703 13:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:04.703 13:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:04.703 13:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:07:04.703 13:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:04.703 13:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:04.703 13:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:04.703 13:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:04.703 13:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:04.703 13:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:04.703 13:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:04.703 13:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:04.703 13:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.703 13:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.963 13:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.963 13:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:04.963 "name": "Existed_Raid", 00:07:04.963 "uuid": "811c9c47-6187-496a-a0a8-c9641bec9b00", 00:07:04.963 "strip_size_kb": 64, 00:07:04.963 "state": "online", 00:07:04.963 "raid_level": "raid0", 00:07:04.963 "superblock": true, 00:07:04.963 "num_base_bdevs": 2, 00:07:04.963 "num_base_bdevs_discovered": 2, 00:07:04.963 "num_base_bdevs_operational": 2, 00:07:04.963 "base_bdevs_list": [ 00:07:04.963 { 00:07:04.963 "name": "BaseBdev1", 00:07:04.963 "uuid": "68a34c5a-4831-46a1-a138-b84afc20f87f", 00:07:04.963 "is_configured": true, 00:07:04.963 "data_offset": 2048, 00:07:04.963 "data_size": 63488 
00:07:04.963 }, 00:07:04.963 { 00:07:04.963 "name": "BaseBdev2", 00:07:04.963 "uuid": "8269d5cd-04cc-4749-b26d-c825b4cece26", 00:07:04.963 "is_configured": true, 00:07:04.963 "data_offset": 2048, 00:07:04.963 "data_size": 63488 00:07:04.963 } 00:07:04.963 ] 00:07:04.963 }' 00:07:04.963 13:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:04.963 13:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.223 13:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:05.223 13:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:05.223 13:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:05.223 13:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:05.223 13:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:05.223 13:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:05.223 13:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:05.223 13:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:05.223 13:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.223 13:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.223 [2024-11-17 13:16:54.384619] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:05.223 13:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.223 13:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:05.223 "name": 
"Existed_Raid", 00:07:05.223 "aliases": [ 00:07:05.223 "811c9c47-6187-496a-a0a8-c9641bec9b00" 00:07:05.223 ], 00:07:05.223 "product_name": "Raid Volume", 00:07:05.223 "block_size": 512, 00:07:05.223 "num_blocks": 126976, 00:07:05.223 "uuid": "811c9c47-6187-496a-a0a8-c9641bec9b00", 00:07:05.223 "assigned_rate_limits": { 00:07:05.223 "rw_ios_per_sec": 0, 00:07:05.223 "rw_mbytes_per_sec": 0, 00:07:05.223 "r_mbytes_per_sec": 0, 00:07:05.223 "w_mbytes_per_sec": 0 00:07:05.223 }, 00:07:05.223 "claimed": false, 00:07:05.223 "zoned": false, 00:07:05.223 "supported_io_types": { 00:07:05.223 "read": true, 00:07:05.223 "write": true, 00:07:05.223 "unmap": true, 00:07:05.223 "flush": true, 00:07:05.223 "reset": true, 00:07:05.223 "nvme_admin": false, 00:07:05.223 "nvme_io": false, 00:07:05.223 "nvme_io_md": false, 00:07:05.223 "write_zeroes": true, 00:07:05.223 "zcopy": false, 00:07:05.223 "get_zone_info": false, 00:07:05.223 "zone_management": false, 00:07:05.223 "zone_append": false, 00:07:05.223 "compare": false, 00:07:05.223 "compare_and_write": false, 00:07:05.223 "abort": false, 00:07:05.223 "seek_hole": false, 00:07:05.223 "seek_data": false, 00:07:05.223 "copy": false, 00:07:05.223 "nvme_iov_md": false 00:07:05.223 }, 00:07:05.223 "memory_domains": [ 00:07:05.223 { 00:07:05.223 "dma_device_id": "system", 00:07:05.223 "dma_device_type": 1 00:07:05.223 }, 00:07:05.223 { 00:07:05.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:05.223 "dma_device_type": 2 00:07:05.223 }, 00:07:05.223 { 00:07:05.223 "dma_device_id": "system", 00:07:05.223 "dma_device_type": 1 00:07:05.223 }, 00:07:05.223 { 00:07:05.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:05.223 "dma_device_type": 2 00:07:05.223 } 00:07:05.223 ], 00:07:05.223 "driver_specific": { 00:07:05.223 "raid": { 00:07:05.223 "uuid": "811c9c47-6187-496a-a0a8-c9641bec9b00", 00:07:05.223 "strip_size_kb": 64, 00:07:05.223 "state": "online", 00:07:05.223 "raid_level": "raid0", 00:07:05.223 "superblock": true, 00:07:05.223 
"num_base_bdevs": 2, 00:07:05.223 "num_base_bdevs_discovered": 2, 00:07:05.223 "num_base_bdevs_operational": 2, 00:07:05.223 "base_bdevs_list": [ 00:07:05.223 { 00:07:05.223 "name": "BaseBdev1", 00:07:05.223 "uuid": "68a34c5a-4831-46a1-a138-b84afc20f87f", 00:07:05.223 "is_configured": true, 00:07:05.223 "data_offset": 2048, 00:07:05.223 "data_size": 63488 00:07:05.223 }, 00:07:05.223 { 00:07:05.223 "name": "BaseBdev2", 00:07:05.223 "uuid": "8269d5cd-04cc-4749-b26d-c825b4cece26", 00:07:05.223 "is_configured": true, 00:07:05.223 "data_offset": 2048, 00:07:05.223 "data_size": 63488 00:07:05.223 } 00:07:05.223 ] 00:07:05.223 } 00:07:05.223 } 00:07:05.223 }' 00:07:05.223 13:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:05.483 13:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:05.483 BaseBdev2' 00:07:05.483 13:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:05.483 13:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:05.483 13:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:05.483 13:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:05.483 13:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.483 13:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.483 13:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:05.483 13:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:05.483 13:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:05.483 13:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:05.483 13:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:05.483 13:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:05.483 13:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.483 13:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.483 13:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:05.483 13:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.483 13:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:05.483 13:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:05.483 13:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:05.483 13:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.483 13:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.483 [2024-11-17 13:16:54.612002] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:05.483 [2024-11-17 13:16:54.612095] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:05.483 [2024-11-17 13:16:54.612156] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:05.742 13:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:07:05.742 13:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:05.742 13:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:05.742 13:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:05.742 13:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:05.742 13:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:05.742 13:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:05.742 13:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:05.742 13:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:05.742 13:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:05.742 13:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:05.742 13:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:05.742 13:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:05.742 13:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:05.742 13:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:05.742 13:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:05.742 13:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:05.742 13:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.742 13:16:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:05.742 13:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.742 13:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.742 13:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:05.742 "name": "Existed_Raid", 00:07:05.742 "uuid": "811c9c47-6187-496a-a0a8-c9641bec9b00", 00:07:05.742 "strip_size_kb": 64, 00:07:05.742 "state": "offline", 00:07:05.742 "raid_level": "raid0", 00:07:05.742 "superblock": true, 00:07:05.742 "num_base_bdevs": 2, 00:07:05.742 "num_base_bdevs_discovered": 1, 00:07:05.742 "num_base_bdevs_operational": 1, 00:07:05.742 "base_bdevs_list": [ 00:07:05.742 { 00:07:05.742 "name": null, 00:07:05.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:05.742 "is_configured": false, 00:07:05.742 "data_offset": 0, 00:07:05.742 "data_size": 63488 00:07:05.742 }, 00:07:05.742 { 00:07:05.742 "name": "BaseBdev2", 00:07:05.742 "uuid": "8269d5cd-04cc-4749-b26d-c825b4cece26", 00:07:05.742 "is_configured": true, 00:07:05.742 "data_offset": 2048, 00:07:05.742 "data_size": 63488 00:07:05.742 } 00:07:05.742 ] 00:07:05.742 }' 00:07:05.742 13:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:05.742 13:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.002 13:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:06.002 13:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:06.002 13:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:06.002 13:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.002 13:16:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.002 13:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:06.002 13:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.002 13:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:06.002 13:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:06.002 13:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:06.002 13:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.002 13:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.002 [2024-11-17 13:16:55.212596] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:06.002 [2024-11-17 13:16:55.212694] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:06.261 13:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.261 13:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:06.261 13:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:06.261 13:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:06.261 13:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:06.262 13:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.262 13:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.262 13:16:55 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.262 13:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:06.262 13:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:06.262 13:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:06.262 13:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 60924 00:07:06.262 13:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 60924 ']' 00:07:06.262 13:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 60924 00:07:06.262 13:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:06.262 13:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:06.262 13:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60924 00:07:06.262 killing process with pid 60924 00:07:06.262 13:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:06.262 13:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:06.262 13:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60924' 00:07:06.262 13:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 60924 00:07:06.262 [2024-11-17 13:16:55.396137] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:06.262 13:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 60924 00:07:06.262 [2024-11-17 13:16:55.413754] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:07.653 13:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 
0 00:07:07.653 00:07:07.653 real 0m5.143s 00:07:07.653 user 0m7.386s 00:07:07.653 sys 0m0.811s 00:07:07.653 ************************************ 00:07:07.653 END TEST raid_state_function_test_sb 00:07:07.653 ************************************ 00:07:07.653 13:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:07.653 13:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:07.653 13:16:56 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:07:07.653 13:16:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:07.653 13:16:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.653 13:16:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:07.653 ************************************ 00:07:07.653 START TEST raid_superblock_test 00:07:07.653 ************************************ 00:07:07.653 13:16:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:07:07.653 13:16:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:07:07.653 13:16:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:07.653 13:16:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:07.653 13:16:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:07.653 13:16:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:07.653 13:16:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:07.653 13:16:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:07.653 13:16:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:07.653 13:16:56 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:07.653 13:16:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:07.653 13:16:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:07.653 13:16:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:07.653 13:16:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:07.653 13:16:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:07:07.653 13:16:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:07.653 13:16:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:07.653 13:16:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61175 00:07:07.653 13:16:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:07.653 13:16:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61175 00:07:07.653 13:16:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61175 ']' 00:07:07.653 13:16:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.653 13:16:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:07.653 13:16:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:07.653 13:16:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:07.653 13:16:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.653 [2024-11-17 13:16:56.760864] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:07:07.653 [2024-11-17 13:16:56.760995] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61175 ] 00:07:07.912 [2024-11-17 13:16:56.935524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.912 [2024-11-17 13:16:57.064045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.171 [2024-11-17 13:16:57.275721] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:08.172 [2024-11-17 13:16:57.275856] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:08.431 13:16:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:08.431 13:16:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:08.431 13:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:08.431 13:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:08.431 13:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:08.431 13:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:08.431 13:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:08.431 13:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:08.431 13:16:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:08.431 13:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:08.431 13:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:08.431 13:16:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.431 13:16:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.691 malloc1 00:07:08.691 13:16:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.691 13:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:08.692 13:16:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.692 13:16:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.692 [2024-11-17 13:16:57.682409] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:08.692 [2024-11-17 13:16:57.682548] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:08.692 [2024-11-17 13:16:57.682605] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:08.692 [2024-11-17 13:16:57.682686] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:08.692 [2024-11-17 13:16:57.685341] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:08.692 [2024-11-17 13:16:57.685422] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:08.692 pt1 00:07:08.692 13:16:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.692 13:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:08.692 13:16:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:08.692 13:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:08.692 13:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:08.692 13:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:08.692 13:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:08.692 13:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:08.692 13:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:08.692 13:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:08.692 13:16:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.692 13:16:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.692 malloc2 00:07:08.692 13:16:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.692 13:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:08.692 13:16:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.692 13:16:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.692 [2024-11-17 13:16:57.746641] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:08.692 [2024-11-17 13:16:57.746701] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:08.692 [2024-11-17 13:16:57.746744] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:08.692 
[2024-11-17 13:16:57.746754] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:08.692 [2024-11-17 13:16:57.749145] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:08.692 [2024-11-17 13:16:57.749246] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:08.692 pt2 00:07:08.692 13:16:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.692 13:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:08.692 13:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:08.692 13:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:08.692 13:16:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.692 13:16:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.692 [2024-11-17 13:16:57.758752] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:08.692 [2024-11-17 13:16:57.760901] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:08.692 [2024-11-17 13:16:57.761166] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:08.692 [2024-11-17 13:16:57.761187] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:08.692 [2024-11-17 13:16:57.761517] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:08.692 [2024-11-17 13:16:57.761699] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:08.692 [2024-11-17 13:16:57.761712] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:08.692 [2024-11-17 13:16:57.761898] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:08.692 13:16:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.692 13:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:08.692 13:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:08.692 13:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:08.692 13:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:08.692 13:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:08.692 13:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:08.692 13:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:08.692 13:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:08.692 13:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:08.692 13:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:08.692 13:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:08.692 13:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:08.692 13:16:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.692 13:16:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.692 13:16:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.692 13:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:08.692 "name": "raid_bdev1", 00:07:08.692 "uuid": 
"d68fb415-8f31-4c83-aa7c-ed2b954f9d71", 00:07:08.692 "strip_size_kb": 64, 00:07:08.692 "state": "online", 00:07:08.692 "raid_level": "raid0", 00:07:08.692 "superblock": true, 00:07:08.692 "num_base_bdevs": 2, 00:07:08.692 "num_base_bdevs_discovered": 2, 00:07:08.692 "num_base_bdevs_operational": 2, 00:07:08.692 "base_bdevs_list": [ 00:07:08.692 { 00:07:08.692 "name": "pt1", 00:07:08.692 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:08.692 "is_configured": true, 00:07:08.692 "data_offset": 2048, 00:07:08.692 "data_size": 63488 00:07:08.692 }, 00:07:08.692 { 00:07:08.692 "name": "pt2", 00:07:08.692 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:08.692 "is_configured": true, 00:07:08.692 "data_offset": 2048, 00:07:08.692 "data_size": 63488 00:07:08.692 } 00:07:08.692 ] 00:07:08.692 }' 00:07:08.692 13:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:08.692 13:16:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.952 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:08.952 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:08.952 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:08.952 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:08.952 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:08.952 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:09.213 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:09.213 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:09.213 13:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.213 13:16:58 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.213 [2024-11-17 13:16:58.182286] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:09.213 13:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.213 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:09.213 "name": "raid_bdev1", 00:07:09.213 "aliases": [ 00:07:09.213 "d68fb415-8f31-4c83-aa7c-ed2b954f9d71" 00:07:09.213 ], 00:07:09.213 "product_name": "Raid Volume", 00:07:09.213 "block_size": 512, 00:07:09.213 "num_blocks": 126976, 00:07:09.213 "uuid": "d68fb415-8f31-4c83-aa7c-ed2b954f9d71", 00:07:09.213 "assigned_rate_limits": { 00:07:09.213 "rw_ios_per_sec": 0, 00:07:09.213 "rw_mbytes_per_sec": 0, 00:07:09.213 "r_mbytes_per_sec": 0, 00:07:09.213 "w_mbytes_per_sec": 0 00:07:09.213 }, 00:07:09.213 "claimed": false, 00:07:09.213 "zoned": false, 00:07:09.213 "supported_io_types": { 00:07:09.213 "read": true, 00:07:09.213 "write": true, 00:07:09.213 "unmap": true, 00:07:09.213 "flush": true, 00:07:09.213 "reset": true, 00:07:09.213 "nvme_admin": false, 00:07:09.213 "nvme_io": false, 00:07:09.213 "nvme_io_md": false, 00:07:09.213 "write_zeroes": true, 00:07:09.213 "zcopy": false, 00:07:09.213 "get_zone_info": false, 00:07:09.213 "zone_management": false, 00:07:09.213 "zone_append": false, 00:07:09.213 "compare": false, 00:07:09.213 "compare_and_write": false, 00:07:09.213 "abort": false, 00:07:09.213 "seek_hole": false, 00:07:09.213 "seek_data": false, 00:07:09.213 "copy": false, 00:07:09.213 "nvme_iov_md": false 00:07:09.213 }, 00:07:09.213 "memory_domains": [ 00:07:09.213 { 00:07:09.213 "dma_device_id": "system", 00:07:09.213 "dma_device_type": 1 00:07:09.213 }, 00:07:09.213 { 00:07:09.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:09.213 "dma_device_type": 2 00:07:09.213 }, 00:07:09.213 { 00:07:09.213 "dma_device_id": "system", 00:07:09.213 "dma_device_type": 
1 00:07:09.213 }, 00:07:09.213 { 00:07:09.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:09.213 "dma_device_type": 2 00:07:09.213 } 00:07:09.213 ], 00:07:09.213 "driver_specific": { 00:07:09.213 "raid": { 00:07:09.213 "uuid": "d68fb415-8f31-4c83-aa7c-ed2b954f9d71", 00:07:09.213 "strip_size_kb": 64, 00:07:09.213 "state": "online", 00:07:09.213 "raid_level": "raid0", 00:07:09.213 "superblock": true, 00:07:09.213 "num_base_bdevs": 2, 00:07:09.213 "num_base_bdevs_discovered": 2, 00:07:09.213 "num_base_bdevs_operational": 2, 00:07:09.213 "base_bdevs_list": [ 00:07:09.213 { 00:07:09.213 "name": "pt1", 00:07:09.213 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:09.213 "is_configured": true, 00:07:09.213 "data_offset": 2048, 00:07:09.213 "data_size": 63488 00:07:09.213 }, 00:07:09.213 { 00:07:09.213 "name": "pt2", 00:07:09.214 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:09.214 "is_configured": true, 00:07:09.214 "data_offset": 2048, 00:07:09.214 "data_size": 63488 00:07:09.214 } 00:07:09.214 ] 00:07:09.214 } 00:07:09.214 } 00:07:09.214 }' 00:07:09.214 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:09.214 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:09.214 pt2' 00:07:09.214 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:09.214 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:09.214 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:09.214 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:09.214 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b 
pt1 00:07:09.214 13:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.214 13:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.214 13:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.214 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:09.214 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:09.214 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:09.214 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:09.214 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:09.214 13:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.214 13:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.214 13:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.214 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:09.214 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:09.214 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:09.214 13:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.214 13:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.214 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:09.214 [2024-11-17 13:16:58.401900] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:09.214 13:16:58 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.214 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d68fb415-8f31-4c83-aa7c-ed2b954f9d71 00:07:09.214 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z d68fb415-8f31-4c83-aa7c-ed2b954f9d71 ']' 00:07:09.214 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:09.214 13:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.214 13:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.214 [2024-11-17 13:16:58.429544] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:09.214 [2024-11-17 13:16:58.429579] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:09.214 [2024-11-17 13:16:58.429693] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:09.214 [2024-11-17 13:16:58.429759] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:09.214 [2024-11-17 13:16:58.429776] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:09.214 13:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.475 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:09.475 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:09.475 13:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.475 13:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.475 13:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.475 13:16:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:09.475 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:09.475 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:09.475 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:09.475 13:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.475 13:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.475 13:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.475 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:09.475 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:09.475 13:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.475 13:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.475 13:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.475 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:09.475 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:09.475 13:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.475 13:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.475 13:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.475 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:09.475 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd 
bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:09.475 13:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:09.475 13:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:09.475 13:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:09.475 13:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:09.475 13:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:09.475 13:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:09.475 13:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:09.475 13:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.475 13:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.475 [2024-11-17 13:16:58.569380] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:09.475 [2024-11-17 13:16:58.571351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:09.475 [2024-11-17 13:16:58.571421] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:09.475 [2024-11-17 13:16:58.571473] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:09.475 [2024-11-17 13:16:58.571497] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:09.475 [2024-11-17 13:16:58.571510] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:09.475 request: 00:07:09.475 { 00:07:09.475 "name": "raid_bdev1", 00:07:09.475 "raid_level": "raid0", 00:07:09.475 "base_bdevs": [ 00:07:09.475 "malloc1", 00:07:09.475 "malloc2" 00:07:09.475 ], 00:07:09.475 "strip_size_kb": 64, 00:07:09.475 "superblock": false, 00:07:09.475 "method": "bdev_raid_create", 00:07:09.475 "req_id": 1 00:07:09.475 } 00:07:09.475 Got JSON-RPC error response 00:07:09.475 response: 00:07:09.475 { 00:07:09.475 "code": -17, 00:07:09.475 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:09.475 } 00:07:09.475 13:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:09.475 13:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:09.475 13:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:09.475 13:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:09.475 13:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:09.475 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:09.475 13:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.475 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:09.475 13:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.475 13:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.475 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:09.475 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:09.475 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:07:09.475 13:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.475 13:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.475 [2024-11-17 13:16:58.633206] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:09.475 [2024-11-17 13:16:58.633335] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:09.475 [2024-11-17 13:16:58.633375] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:09.475 [2024-11-17 13:16:58.633418] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:09.475 [2024-11-17 13:16:58.635622] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:09.475 [2024-11-17 13:16:58.635707] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:09.475 [2024-11-17 13:16:58.635836] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:09.475 [2024-11-17 13:16:58.635944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:09.475 pt1 00:07:09.475 13:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.475 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:07:09.475 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:09.475 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:09.475 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:09.475 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:09.476 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:07:09.476 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:09.476 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:09.476 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:09.476 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:09.476 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:09.476 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:09.476 13:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.476 13:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.476 13:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.476 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:09.476 "name": "raid_bdev1", 00:07:09.476 "uuid": "d68fb415-8f31-4c83-aa7c-ed2b954f9d71", 00:07:09.476 "strip_size_kb": 64, 00:07:09.476 "state": "configuring", 00:07:09.476 "raid_level": "raid0", 00:07:09.476 "superblock": true, 00:07:09.476 "num_base_bdevs": 2, 00:07:09.476 "num_base_bdevs_discovered": 1, 00:07:09.476 "num_base_bdevs_operational": 2, 00:07:09.476 "base_bdevs_list": [ 00:07:09.476 { 00:07:09.476 "name": "pt1", 00:07:09.476 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:09.476 "is_configured": true, 00:07:09.476 "data_offset": 2048, 00:07:09.476 "data_size": 63488 00:07:09.476 }, 00:07:09.476 { 00:07:09.476 "name": null, 00:07:09.476 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:09.476 "is_configured": false, 00:07:09.476 "data_offset": 2048, 00:07:09.476 "data_size": 63488 00:07:09.476 } 00:07:09.476 ] 00:07:09.476 }' 00:07:09.476 13:16:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:09.476 13:16:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.045 13:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:10.045 13:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:10.045 13:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:10.045 13:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:10.045 13:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.045 13:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.045 [2024-11-17 13:16:59.100450] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:10.045 [2024-11-17 13:16:59.100546] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:10.045 [2024-11-17 13:16:59.100569] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:10.045 [2024-11-17 13:16:59.100580] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:10.045 [2024-11-17 13:16:59.101041] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:10.045 [2024-11-17 13:16:59.101060] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:10.045 [2024-11-17 13:16:59.101144] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:10.045 [2024-11-17 13:16:59.101167] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:10.045 [2024-11-17 13:16:59.101293] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:10.045 [2024-11-17 13:16:59.101305] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:10.045 [2024-11-17 13:16:59.101544] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:10.045 [2024-11-17 13:16:59.101706] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:10.045 [2024-11-17 13:16:59.101757] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:10.045 [2024-11-17 13:16:59.101903] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:10.045 pt2 00:07:10.045 13:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.045 13:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:10.045 13:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:10.045 13:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:10.045 13:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:10.045 13:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:10.045 13:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:10.045 13:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:10.045 13:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:10.045 13:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:10.045 13:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:10.045 13:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:10.045 13:16:59 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:07:10.045 13:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:10.045 13:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.045 13:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.045 13:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:10.045 13:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.045 13:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:10.045 "name": "raid_bdev1", 00:07:10.045 "uuid": "d68fb415-8f31-4c83-aa7c-ed2b954f9d71", 00:07:10.045 "strip_size_kb": 64, 00:07:10.045 "state": "online", 00:07:10.045 "raid_level": "raid0", 00:07:10.045 "superblock": true, 00:07:10.045 "num_base_bdevs": 2, 00:07:10.045 "num_base_bdevs_discovered": 2, 00:07:10.045 "num_base_bdevs_operational": 2, 00:07:10.045 "base_bdevs_list": [ 00:07:10.045 { 00:07:10.045 "name": "pt1", 00:07:10.045 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:10.045 "is_configured": true, 00:07:10.045 "data_offset": 2048, 00:07:10.045 "data_size": 63488 00:07:10.045 }, 00:07:10.045 { 00:07:10.045 "name": "pt2", 00:07:10.045 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:10.045 "is_configured": true, 00:07:10.045 "data_offset": 2048, 00:07:10.045 "data_size": 63488 00:07:10.045 } 00:07:10.045 ] 00:07:10.045 }' 00:07:10.045 13:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:10.045 13:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.616 13:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:10.616 13:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:10.616 
13:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:10.616 13:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:10.616 13:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:10.616 13:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:10.616 13:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:10.616 13:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:10.616 13:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.616 13:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.616 [2024-11-17 13:16:59.555865] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:10.616 13:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.616 13:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:10.616 "name": "raid_bdev1", 00:07:10.616 "aliases": [ 00:07:10.616 "d68fb415-8f31-4c83-aa7c-ed2b954f9d71" 00:07:10.616 ], 00:07:10.616 "product_name": "Raid Volume", 00:07:10.616 "block_size": 512, 00:07:10.616 "num_blocks": 126976, 00:07:10.616 "uuid": "d68fb415-8f31-4c83-aa7c-ed2b954f9d71", 00:07:10.616 "assigned_rate_limits": { 00:07:10.616 "rw_ios_per_sec": 0, 00:07:10.616 "rw_mbytes_per_sec": 0, 00:07:10.616 "r_mbytes_per_sec": 0, 00:07:10.616 "w_mbytes_per_sec": 0 00:07:10.616 }, 00:07:10.616 "claimed": false, 00:07:10.616 "zoned": false, 00:07:10.616 "supported_io_types": { 00:07:10.616 "read": true, 00:07:10.616 "write": true, 00:07:10.616 "unmap": true, 00:07:10.616 "flush": true, 00:07:10.616 "reset": true, 00:07:10.616 "nvme_admin": false, 00:07:10.616 "nvme_io": false, 00:07:10.616 "nvme_io_md": false, 00:07:10.616 
"write_zeroes": true, 00:07:10.616 "zcopy": false, 00:07:10.616 "get_zone_info": false, 00:07:10.616 "zone_management": false, 00:07:10.616 "zone_append": false, 00:07:10.616 "compare": false, 00:07:10.616 "compare_and_write": false, 00:07:10.616 "abort": false, 00:07:10.616 "seek_hole": false, 00:07:10.616 "seek_data": false, 00:07:10.616 "copy": false, 00:07:10.616 "nvme_iov_md": false 00:07:10.616 }, 00:07:10.616 "memory_domains": [ 00:07:10.616 { 00:07:10.616 "dma_device_id": "system", 00:07:10.616 "dma_device_type": 1 00:07:10.616 }, 00:07:10.616 { 00:07:10.616 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:10.616 "dma_device_type": 2 00:07:10.616 }, 00:07:10.616 { 00:07:10.616 "dma_device_id": "system", 00:07:10.616 "dma_device_type": 1 00:07:10.616 }, 00:07:10.616 { 00:07:10.616 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:10.616 "dma_device_type": 2 00:07:10.616 } 00:07:10.616 ], 00:07:10.616 "driver_specific": { 00:07:10.616 "raid": { 00:07:10.616 "uuid": "d68fb415-8f31-4c83-aa7c-ed2b954f9d71", 00:07:10.616 "strip_size_kb": 64, 00:07:10.616 "state": "online", 00:07:10.616 "raid_level": "raid0", 00:07:10.616 "superblock": true, 00:07:10.616 "num_base_bdevs": 2, 00:07:10.616 "num_base_bdevs_discovered": 2, 00:07:10.616 "num_base_bdevs_operational": 2, 00:07:10.616 "base_bdevs_list": [ 00:07:10.616 { 00:07:10.616 "name": "pt1", 00:07:10.616 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:10.616 "is_configured": true, 00:07:10.616 "data_offset": 2048, 00:07:10.616 "data_size": 63488 00:07:10.616 }, 00:07:10.616 { 00:07:10.616 "name": "pt2", 00:07:10.616 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:10.616 "is_configured": true, 00:07:10.616 "data_offset": 2048, 00:07:10.616 "data_size": 63488 00:07:10.616 } 00:07:10.616 ] 00:07:10.616 } 00:07:10.616 } 00:07:10.616 }' 00:07:10.616 13:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:07:10.616 13:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:10.616 pt2' 00:07:10.616 13:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:10.616 13:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:10.616 13:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:10.616 13:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:10.616 13:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:10.616 13:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.616 13:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.616 13:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.616 13:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:10.616 13:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:10.616 13:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:10.616 13:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:10.616 13:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.616 13:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.616 13:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:10.616 13:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.616 13:16:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:10.616 13:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:10.616 13:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:10.616 13:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:10.616 13:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.616 13:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.616 [2024-11-17 13:16:59.743509] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:10.616 13:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.616 13:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' d68fb415-8f31-4c83-aa7c-ed2b954f9d71 '!=' d68fb415-8f31-4c83-aa7c-ed2b954f9d71 ']' 00:07:10.616 13:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:07:10.616 13:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:10.616 13:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:10.616 13:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61175 00:07:10.616 13:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61175 ']' 00:07:10.616 13:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 61175 00:07:10.616 13:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:10.616 13:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:10.616 13:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61175 00:07:10.616 13:16:59 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:10.616 13:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:10.617 13:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61175' 00:07:10.617 killing process with pid 61175 00:07:10.617 13:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 61175 00:07:10.617 [2024-11-17 13:16:59.827008] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:10.617 13:16:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 61175 00:07:10.617 [2024-11-17 13:16:59.827187] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:10.617 [2024-11-17 13:16:59.827257] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:10.617 [2024-11-17 13:16:59.827271] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:10.876 [2024-11-17 13:17:00.030496] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:12.304 13:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:12.304 00:07:12.304 real 0m4.437s 00:07:12.304 user 0m6.250s 00:07:12.304 sys 0m0.746s 00:07:12.304 13:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:12.304 13:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.304 ************************************ 00:07:12.304 END TEST raid_superblock_test 00:07:12.304 ************************************ 00:07:12.304 13:17:01 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:07:12.304 13:17:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:12.304 13:17:01 bdev_raid -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:07:12.304 13:17:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:12.304 ************************************ 00:07:12.304 START TEST raid_read_error_test 00:07:12.304 ************************************ 00:07:12.304 13:17:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:07:12.304 13:17:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:12.304 13:17:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:12.304 13:17:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:12.304 13:17:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:12.304 13:17:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:12.304 13:17:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:12.304 13:17:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:12.304 13:17:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:12.304 13:17:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:12.304 13:17:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:12.304 13:17:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:12.304 13:17:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:12.304 13:17:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:12.304 13:17:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:12.304 13:17:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:12.304 13:17:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- 
# local create_arg 00:07:12.304 13:17:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:12.304 13:17:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:12.304 13:17:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:12.304 13:17:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:12.304 13:17:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:12.304 13:17:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:12.304 13:17:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.9dUi0jyiAu 00:07:12.304 13:17:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61382 00:07:12.304 13:17:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:12.305 13:17:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61382 00:07:12.305 13:17:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 61382 ']' 00:07:12.305 13:17:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.305 13:17:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:12.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.305 13:17:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:12.305 13:17:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:12.305 13:17:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.305 [2024-11-17 13:17:01.282034] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:07:12.305 [2024-11-17 13:17:01.282157] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61382 ] 00:07:12.305 [2024-11-17 13:17:01.455364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.565 [2024-11-17 13:17:01.567465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.565 [2024-11-17 13:17:01.757341] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:12.565 [2024-11-17 13:17:01.757391] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:13.137 13:17:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:13.137 13:17:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:13.137 13:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:13.137 13:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:13.137 13:17:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.137 13:17:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.137 BaseBdev1_malloc 00:07:13.137 13:17:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.137 13:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:07:13.137 13:17:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.137 13:17:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.137 true 00:07:13.137 13:17:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.137 13:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:13.137 13:17:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.137 13:17:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.137 [2024-11-17 13:17:02.162626] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:13.137 [2024-11-17 13:17:02.162683] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:13.137 [2024-11-17 13:17:02.162703] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:13.137 [2024-11-17 13:17:02.162714] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:13.137 [2024-11-17 13:17:02.164840] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:13.137 [2024-11-17 13:17:02.164883] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:13.137 BaseBdev1 00:07:13.137 13:17:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.137 13:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:13.137 13:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:13.137 13:17:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.137 13:17:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:07:13.137 BaseBdev2_malloc 00:07:13.137 13:17:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.137 13:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:13.137 13:17:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.137 13:17:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.137 true 00:07:13.137 13:17:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.137 13:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:13.137 13:17:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.137 13:17:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.137 [2024-11-17 13:17:02.229126] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:13.137 [2024-11-17 13:17:02.229233] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:13.137 [2024-11-17 13:17:02.229253] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:13.137 [2024-11-17 13:17:02.229264] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:13.137 [2024-11-17 13:17:02.231302] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:13.137 [2024-11-17 13:17:02.231339] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:13.137 BaseBdev2 00:07:13.137 13:17:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.137 13:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:13.137 13:17:02 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.137 13:17:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.137 [2024-11-17 13:17:02.241166] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:13.137 [2024-11-17 13:17:02.243008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:13.137 [2024-11-17 13:17:02.243196] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:13.137 [2024-11-17 13:17:02.243224] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:13.137 [2024-11-17 13:17:02.243433] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:13.137 [2024-11-17 13:17:02.243600] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:13.137 [2024-11-17 13:17:02.243611] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:13.137 [2024-11-17 13:17:02.243744] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:13.137 13:17:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.137 13:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:13.137 13:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:13.137 13:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:13.137 13:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:13.137 13:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:13.137 13:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:13.137 13:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:13.137 13:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:13.137 13:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:13.137 13:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:13.137 13:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:13.137 13:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:13.137 13:17:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.137 13:17:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.137 13:17:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.137 13:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:13.137 "name": "raid_bdev1", 00:07:13.137 "uuid": "24ec24ed-8cc3-486e-8e42-0ad0ecf83bbb", 00:07:13.137 "strip_size_kb": 64, 00:07:13.137 "state": "online", 00:07:13.137 "raid_level": "raid0", 00:07:13.137 "superblock": true, 00:07:13.137 "num_base_bdevs": 2, 00:07:13.137 "num_base_bdevs_discovered": 2, 00:07:13.137 "num_base_bdevs_operational": 2, 00:07:13.137 "base_bdevs_list": [ 00:07:13.137 { 00:07:13.137 "name": "BaseBdev1", 00:07:13.137 "uuid": "03e45431-8977-5bd4-89e3-b8aae5518d5c", 00:07:13.137 "is_configured": true, 00:07:13.137 "data_offset": 2048, 00:07:13.137 "data_size": 63488 00:07:13.137 }, 00:07:13.137 { 00:07:13.137 "name": "BaseBdev2", 00:07:13.137 "uuid": "1811c7fa-3c8b-50fa-a46d-abb74c1e9387", 00:07:13.137 "is_configured": true, 00:07:13.137 "data_offset": 2048, 00:07:13.137 "data_size": 63488 00:07:13.137 } 00:07:13.137 ] 00:07:13.137 }' 00:07:13.137 13:17:02 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:13.137 13:17:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.707 13:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:13.707 13:17:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:13.708 [2024-11-17 13:17:02.781563] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:14.647 13:17:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:14.647 13:17:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.647 13:17:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.647 13:17:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.647 13:17:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:14.647 13:17:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:14.647 13:17:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:14.647 13:17:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:14.647 13:17:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:14.647 13:17:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:14.647 13:17:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:14.647 13:17:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:14.647 13:17:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:14.647 13:17:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:14.647 13:17:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:14.647 13:17:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:14.647 13:17:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:14.647 13:17:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:14.647 13:17:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:14.647 13:17:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.647 13:17:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.647 13:17:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.647 13:17:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:14.647 "name": "raid_bdev1", 00:07:14.647 "uuid": "24ec24ed-8cc3-486e-8e42-0ad0ecf83bbb", 00:07:14.647 "strip_size_kb": 64, 00:07:14.647 "state": "online", 00:07:14.647 "raid_level": "raid0", 00:07:14.647 "superblock": true, 00:07:14.647 "num_base_bdevs": 2, 00:07:14.647 "num_base_bdevs_discovered": 2, 00:07:14.647 "num_base_bdevs_operational": 2, 00:07:14.647 "base_bdevs_list": [ 00:07:14.647 { 00:07:14.647 "name": "BaseBdev1", 00:07:14.647 "uuid": "03e45431-8977-5bd4-89e3-b8aae5518d5c", 00:07:14.647 "is_configured": true, 00:07:14.647 "data_offset": 2048, 00:07:14.647 "data_size": 63488 00:07:14.647 }, 00:07:14.647 { 00:07:14.647 "name": "BaseBdev2", 00:07:14.647 "uuid": "1811c7fa-3c8b-50fa-a46d-abb74c1e9387", 00:07:14.647 "is_configured": true, 00:07:14.647 "data_offset": 2048, 00:07:14.647 "data_size": 63488 00:07:14.647 } 00:07:14.647 ] 00:07:14.647 }' 00:07:14.647 13:17:03 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:14.647 13:17:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.218 13:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:15.218 13:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.218 13:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.218 [2024-11-17 13:17:04.138801] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:15.218 [2024-11-17 13:17:04.138837] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:15.218 [2024-11-17 13:17:04.141490] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:15.218 [2024-11-17 13:17:04.141541] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:15.218 [2024-11-17 13:17:04.141577] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:15.218 [2024-11-17 13:17:04.141590] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:15.218 { 00:07:15.218 "results": [ 00:07:15.218 { 00:07:15.218 "job": "raid_bdev1", 00:07:15.218 "core_mask": "0x1", 00:07:15.218 "workload": "randrw", 00:07:15.218 "percentage": 50, 00:07:15.218 "status": "finished", 00:07:15.218 "queue_depth": 1, 00:07:15.218 "io_size": 131072, 00:07:15.218 "runtime": 1.358001, 00:07:15.218 "iops": 16521.342767788832, 00:07:15.218 "mibps": 2065.167845973604, 00:07:15.218 "io_failed": 1, 00:07:15.218 "io_timeout": 0, 00:07:15.218 "avg_latency_us": 84.1219095174397, 00:07:15.218 "min_latency_us": 24.482096069868994, 00:07:15.218 "max_latency_us": 1387.989519650655 00:07:15.218 } 00:07:15.218 ], 00:07:15.218 "core_count": 1 00:07:15.218 } 00:07:15.218 13:17:04 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.218 13:17:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61382 00:07:15.218 13:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 61382 ']' 00:07:15.218 13:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 61382 00:07:15.218 13:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:15.218 13:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:15.218 13:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61382 00:07:15.218 13:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:15.218 13:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:15.218 13:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61382' 00:07:15.218 killing process with pid 61382 00:07:15.218 13:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 61382 00:07:15.218 [2024-11-17 13:17:04.188186] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:15.218 13:17:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 61382 00:07:15.218 [2024-11-17 13:17:04.327258] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:16.602 13:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:16.602 13:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.9dUi0jyiAu 00:07:16.602 13:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:16.602 13:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:07:16.602 13:17:05 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:16.602 ************************************ 00:07:16.602 END TEST raid_read_error_test 00:07:16.602 ************************************ 00:07:16.602 13:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:16.602 13:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:16.602 13:17:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:07:16.602 00:07:16.602 real 0m4.275s 00:07:16.602 user 0m5.115s 00:07:16.602 sys 0m0.525s 00:07:16.602 13:17:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:16.602 13:17:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.602 13:17:05 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:07:16.602 13:17:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:16.602 13:17:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:16.602 13:17:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:16.602 ************************************ 00:07:16.602 START TEST raid_write_error_test 00:07:16.602 ************************************ 00:07:16.602 13:17:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:07:16.603 13:17:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:16.603 13:17:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:16.603 13:17:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:16.603 13:17:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:16.603 13:17:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:16.603 13:17:05 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:16.603 13:17:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:16.603 13:17:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:16.603 13:17:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:16.603 13:17:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:16.603 13:17:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:16.603 13:17:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:16.603 13:17:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:16.603 13:17:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:16.603 13:17:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:16.603 13:17:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:16.603 13:17:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:16.603 13:17:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:16.603 13:17:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:16.603 13:17:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:16.603 13:17:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:16.603 13:17:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:16.603 13:17:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.GpEnm8GUUU 00:07:16.603 13:17:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61528 00:07:16.603 13:17:05 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61528 00:07:16.603 13:17:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:16.603 13:17:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61528 ']' 00:07:16.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.603 13:17:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.603 13:17:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:16.603 13:17:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.603 13:17:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:16.603 13:17:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.603 [2024-11-17 13:17:05.625416] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:07:16.603 [2024-11-17 13:17:05.625608] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61528 ] 00:07:16.603 [2024-11-17 13:17:05.797560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.878 [2024-11-17 13:17:05.912585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.138 [2024-11-17 13:17:06.106365] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:17.138 [2024-11-17 13:17:06.106405] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:17.399 13:17:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:17.399 13:17:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:17.399 13:17:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:17.399 13:17:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:17.399 13:17:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.399 13:17:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.399 BaseBdev1_malloc 00:07:17.399 13:17:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.399 13:17:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:17.399 13:17:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.399 13:17:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.399 true 00:07:17.399 13:17:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:07:17.399 13:17:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:17.399 13:17:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.399 13:17:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.399 [2024-11-17 13:17:06.512344] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:17.399 [2024-11-17 13:17:06.512436] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:17.399 [2024-11-17 13:17:06.512464] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:17.399 [2024-11-17 13:17:06.512475] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:17.399 [2024-11-17 13:17:06.514570] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:17.399 [2024-11-17 13:17:06.514609] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:17.399 BaseBdev1 00:07:17.399 13:17:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.399 13:17:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:17.399 13:17:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:17.399 13:17:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.399 13:17:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.399 BaseBdev2_malloc 00:07:17.399 13:17:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.399 13:17:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:17.399 13:17:06 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.399 13:17:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.399 true 00:07:17.399 13:17:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.399 13:17:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:17.399 13:17:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.399 13:17:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.399 [2024-11-17 13:17:06.578328] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:17.399 [2024-11-17 13:17:06.578377] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:17.399 [2024-11-17 13:17:06.578393] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:17.399 [2024-11-17 13:17:06.578403] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:17.399 [2024-11-17 13:17:06.580398] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:17.399 [2024-11-17 13:17:06.580498] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:17.399 BaseBdev2 00:07:17.399 13:17:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.399 13:17:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:17.399 13:17:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.399 13:17:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.399 [2024-11-17 13:17:06.590367] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed
00:07:17.399 [2024-11-17 13:17:06.592148] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:07:17.399 [2024-11-17 13:17:06.592397] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:07:17.399 [2024-11-17 13:17:06.592455] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:07:17.399 [2024-11-17 13:17:06.592714] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:07:17.399 [2024-11-17 13:17:06.592925] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:07:17.399 [2024-11-17 13:17:06.592971] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:07:17.399 [2024-11-17 13:17:06.593191] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:17.399 13:17:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:17.399 13:17:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2
00:07:17.399 13:17:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:07:17.399 13:17:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:17.399 13:17:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:07:17.399 13:17:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:17.399 13:17:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:17.399 13:17:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:17.399 13:17:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:17.399 13:17:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:17.399 13:17:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:17.399 13:17:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:17.399 13:17:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:17.399 13:17:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:07:17.399 13:17:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:17.399 13:17:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:17.660 13:17:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:17.660 "name": "raid_bdev1",
00:07:17.660 "uuid": "f9c2aa52-fc84-4473-934d-da2f7b98455e",
00:07:17.660 "strip_size_kb": 64,
00:07:17.660 "state": "online",
00:07:17.660 "raid_level": "raid0",
00:07:17.660 "superblock": true,
00:07:17.660 "num_base_bdevs": 2,
00:07:17.660 "num_base_bdevs_discovered": 2,
00:07:17.660 "num_base_bdevs_operational": 2,
00:07:17.660 "base_bdevs_list": [
00:07:17.660 {
00:07:17.660 "name": "BaseBdev1",
00:07:17.660 "uuid": "584f6bc7-c52e-537f-be7f-224f912aab8a",
00:07:17.660 "is_configured": true,
00:07:17.660 "data_offset": 2048,
00:07:17.660 "data_size": 63488
00:07:17.660 },
00:07:17.660 {
00:07:17.660 "name": "BaseBdev2",
00:07:17.660 "uuid": "67d4caeb-ccf8-5825-869a-22f290214389",
00:07:17.660 "is_configured": true,
00:07:17.660 "data_offset": 2048,
00:07:17.660 "data_size": 63488
00:07:17.660 }
00:07:17.660 ]
00:07:17.660 }'
00:07:17.660 13:17:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:17.660 13:17:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:17.920 13:17:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:07:17.920 13:17:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:07:17.920 [2024-11-17 13:17:07.110847] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0
00:07:18.857 13:17:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure
00:07:18.857 13:17:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:18.857 13:17:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:18.857 13:17:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:18.857 13:17:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:07:18.857 13:17:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]]
00:07:18.857 13:17:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2
00:07:18.857 13:17:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2
00:07:18.857 13:17:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:07:18.857 13:17:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:18.857 13:17:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:07:18.857 13:17:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:18.857 13:17:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:18.857 13:17:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:18.857 13:17:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:18.858 13:17:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:18.858 13:17:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:18.858 13:17:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:18.858 13:17:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:07:18.858 13:17:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:18.858 13:17:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:18.858 13:17:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:19.117 13:17:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:19.117 "name": "raid_bdev1",
00:07:19.117 "uuid": "f9c2aa52-fc84-4473-934d-da2f7b98455e",
00:07:19.117 "strip_size_kb": 64,
00:07:19.117 "state": "online",
00:07:19.117 "raid_level": "raid0",
00:07:19.117 "superblock": true,
00:07:19.117 "num_base_bdevs": 2,
00:07:19.117 "num_base_bdevs_discovered": 2,
00:07:19.117 "num_base_bdevs_operational": 2,
00:07:19.117 "base_bdevs_list": [
00:07:19.117 {
00:07:19.117 "name": "BaseBdev1",
00:07:19.117 "uuid": "584f6bc7-c52e-537f-be7f-224f912aab8a",
00:07:19.117 "is_configured": true,
00:07:19.117 "data_offset": 2048,
00:07:19.117 "data_size": 63488
00:07:19.117 },
00:07:19.117 {
00:07:19.117 "name": "BaseBdev2",
00:07:19.117 "uuid": "67d4caeb-ccf8-5825-869a-22f290214389",
00:07:19.117 "is_configured": true,
00:07:19.117 "data_offset": 2048,
00:07:19.117 "data_size": 63488
00:07:19.117 }
00:07:19.117 ]
00:07:19.117 }'
00:07:19.117 13:17:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:19.117 13:17:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:19.377 13:17:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:07:19.377 13:17:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:19.377 13:17:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:19.377 [2024-11-17 13:17:08.525422] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:07:19.377 [2024-11-17 13:17:08.525511] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:07:19.377 [2024-11-17 13:17:08.528059] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:19.377 {
00:07:19.377 "results": [
00:07:19.377 {
00:07:19.377 "job": "raid_bdev1",
00:07:19.377 "core_mask": "0x1",
00:07:19.377 "workload": "randrw",
00:07:19.377 "percentage": 50,
00:07:19.377 "status": "finished",
00:07:19.377 "queue_depth": 1,
00:07:19.377 "io_size": 131072,
00:07:19.377 "runtime": 1.415548,
00:07:19.377 "iops": 16585.096372570904,
00:07:19.377 "mibps": 2073.137046571363,
00:07:19.377 "io_failed": 1,
00:07:19.377 "io_timeout": 0,
00:07:19.377 "avg_latency_us": 83.70831107892141,
00:07:19.377 "min_latency_us": 24.482096069868994,
00:07:19.377 "max_latency_us": 1387.989519650655
00:07:19.377 }
00:07:19.377 ],
00:07:19.377 "core_count": 1
00:07:19.377 }
00:07:19.377 [2024-11-17 13:17:08.528141] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:19.377 [2024-11-17 13:17:08.528177] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:19.377 [2024-11-17 13:17:08.528189] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:07:19.377 13:17:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:19.377 13:17:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61528
00:07:19.377 13:17:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 61528 ']'
00:07:19.377 13:17:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61528
00:07:19.377 13:17:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname
00:07:19.377 13:17:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:19.377 13:17:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61528
killing process with pid 61528
13:17:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:19.377 13:17:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:19.377 13:17:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61528'
00:07:19.377 13:17:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61528
00:07:19.377 [2024-11-17 13:17:08.574098] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:07:19.377 13:17:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 61528
00:07:19.636 [2024-11-17 13:17:08.702775] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:07:21.017 13:17:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.GpEnm8GUUU
00:07:21.017 13:17:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:07:21.017 13:17:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:07:21.017 13:17:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71
00:07:21.017 13:17:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0
00:07:21.017 13:17:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:07:21.017 13:17:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1
00:07:21.017 13:17:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]]
************************************
00:07:21.017 END TEST raid_write_error_test
************************************
00:07:21.017
00:07:21.017 real 0m4.306s
00:07:21.017 user 0m5.187s
00:07:21.017 sys 0m0.529s
00:07:21.017 13:17:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:21.017 13:17:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:21.017 13:17:09 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:07:21.017 13:17:09 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false
00:07:21.017 13:17:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:07:21.017 13:17:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:21.017 13:17:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:07:21.017 ************************************
00:07:21.017 START TEST raid_state_function_test
************************************
00:07:21.017 13:17:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false
00:07:21.017 13:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat
00:07:21.017 13:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2
00:07:21.017 13:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:07:21.017 13:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:07:21.017 13:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:07:21.017 13:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
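The raid_write_error_test trace above ends by pulling the sixth field of the bdevperf result line into fail_per_s and asserting `[[ 0.71 != \0\.\0\0 ]]`: raid0 has no redundancy, so the injected write error must show up as a nonzero failure rate. The arithmetic behind those reported numbers can be restated in Python (illustrative only, not part of the test suite; the values are copied from the "results" record printed earlier in the log):

```python
# Sanity-check of the bdevperf numbers reported for raid_bdev1 in the log.
io_size = 131072           # bytes per I/O (128 KiB)
runtime = 1.415548         # seconds
iops = 16585.096372570904
io_failed = 1

# bdev_raid.sh derives fail_per_s from the bdevperf output with grep/awk;
# numerically it is failed I/Os divided by runtime.
fail_per_s = io_failed / runtime
print(f"fail_per_s={fail_per_s:.2f}")  # 0.71, the value compared against 0.00

# Throughput in MiB/s is IOPS scaled by the I/O size.
mibps = iops * io_size / (1 << 20)
print(f"mibps={mibps:.3f}")            # ~2073.137, matching the report
```

The grep/awk pipeline in the shell trace is just a text-scrape of the same record; the division above is where the 0.71 actually comes from.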
00:07:21.017 13:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:07:21.017 13:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:07:21.017 13:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:07:21.017 13:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:07:21.017 13:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:07:21.017 13:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:07:21.017 13:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:07:21.017 13:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:07:21.017 13:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:07:21.017 13:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:07:21.017 13:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:07:21.017 13:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:07:21.017 13:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']'
00:07:21.017 13:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:07:21.017 13:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:07:21.017 13:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:07:21.017 13:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
Process raid pid: 61666
00:07:21.017 13:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61666
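The prologue traced above loops `(( i = 1 )); (( i <= num_base_bdevs ))`, echoing `BaseBdev$i` into the `base_bdevs` array, sets `strip_size_create_arg='-z 64'` because the level is not raid1, and leaves `superblock_create_arg` empty because superblock=false. A rough Python restatement (hypothetical helper, not code from the SPDK repo; the flag for the superblock=true branch is an assumption, since this log only shows the empty case):

```python
# Hypothetical restatement of the raid_state_function_test prologue above.
def build_raid_create_args(raid_level: str, num_base_bdevs: int, superblock: bool):
    # The shell loop echoes BaseBdev$i and collects the names into base_bdevs.
    base_bdevs = [f"BaseBdev{i}" for i in range(1, num_base_bdevs + 1)]
    # raid1 takes no strip size; every other level gets "-z 64" here.
    strip_size_create_arg = "" if raid_level == "raid1" else "-z 64"
    # superblock=false leaves the argument empty; "-s" for the true case
    # is assumed, as that branch is not exercised in this log.
    superblock_create_arg = "-s" if superblock else ""
    return base_bdevs, strip_size_create_arg, superblock_create_arg

print(build_raid_create_args("concat", 2, False))
```

With the traced inputs (concat, 2, false) this yields the same `('BaseBdev1' 'BaseBdev2')` array, `-z 64`, and empty superblock argument seen in the trace.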
00:07:21.017 13:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:07:21.017 13:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61666'
00:07:21.017 13:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61666
00:07:21.017 13:17:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61666 ']'
00:07:21.017 13:17:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:21.017 13:17:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:21.017 13:17:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:21.017 13:17:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:21.017 13:17:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:21.017 [2024-11-17 13:17:09.996382] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization...
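`waitforlisten 61666` in the trace above blocks (up to max_retries=100) until the freshly started bdev_svc process is accepting RPCs on /var/tmp/spdk.sock. A minimal sketch of that kind of wait, assuming a plain connect poll on the UNIX socket rather than the real autotest helper (which also verifies the pid is still alive):

```python
import socket
import time

def wait_for_rpc_socket(path: str, timeout: float = 10.0) -> bool:
    """Poll until something is accepting connections on a UNIX domain socket,
    roughly what waitforlisten does for /var/tmp/spdk.sock. Simplified sketch."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            try:
                s.connect(path)
                return True          # server is up and listening
            except OSError:
                time.sleep(0.1)      # socket absent or not listening yet; retry
    return False
```

Polling for a successful connect, rather than just for the socket file to exist, matters: the file can appear before the server has called listen(), and a connect attempt distinguishes the two.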
00:07:21.017 [2024-11-17 13:17:09.996520] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:21.017 [2024-11-17 13:17:10.170563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:21.278 [2024-11-17 13:17:10.281520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:21.278 [2024-11-17 13:17:10.475091] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:21.278 [2024-11-17 13:17:10.475130] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:21.849 13:17:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:21.849 13:17:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0
00:07:21.849 13:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:07:21.849 13:17:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:21.849 13:17:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:21.849 [2024-11-17 13:17:10.819057] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:07:21.849 [2024-11-17 13:17:10.819179] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:07:21.849 [2024-11-17 13:17:10.819204] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:07:21.849 [2024-11-17 13:17:10.819228] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:07:21.849 13:17:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:21.849 13:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
00:07:21.849 13:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:21.849 13:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:21.849 13:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:07:21.849 13:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:21.849 13:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:21.849 13:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:21.849 13:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:21.849 13:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:21.849 13:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:21.849 13:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:21.849 13:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:21.849 13:17:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:21.849 13:17:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:21.849 13:17:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:21.849 13:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:21.849 "name": "Existed_Raid",
00:07:21.849 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:21.849 "strip_size_kb": 64,
00:07:21.849 "state": "configuring",
"raid_level": "concat",
00:07:21.849 "superblock": false,
00:07:21.849 "num_base_bdevs": 2,
00:07:21.849 "num_base_bdevs_discovered": 0,
00:07:21.849 "num_base_bdevs_operational": 2,
00:07:21.849 "base_bdevs_list": [
00:07:21.849 {
00:07:21.849 "name": "BaseBdev1",
00:07:21.849 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:21.849 "is_configured": false,
00:07:21.849 "data_offset": 0,
00:07:21.849 "data_size": 0
00:07:21.849 },
00:07:21.849 {
00:07:21.849 "name": "BaseBdev2",
00:07:21.849 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:21.849 "is_configured": false,
00:07:21.849 "data_offset": 0,
00:07:21.849 "data_size": 0
00:07:21.849 }
00:07:21.849 ]
00:07:21.849 }'
00:07:21.849 13:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:21.849 13:17:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:22.109 13:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:07:22.109 13:17:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:22.109 13:17:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:22.109 [2024-11-17 13:17:11.310191] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:07:22.109 [2024-11-17 13:17:11.310291] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:07:22.109 13:17:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:22.109 13:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:07:22.109 13:17:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:22.109 13:17:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:22.109 [2024-11-17 13:17:11.318164] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:07:22.109 [2024-11-17 13:17:11.318258] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:07:22.109 [2024-11-17 13:17:11.318287] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:07:22.109 [2024-11-17 13:17:11.318312] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:07:22.109 13:17:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:22.109 13:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:07:22.109 13:17:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:22.109 13:17:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:22.370 [2024-11-17 13:17:11.360748] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:07:22.370 BaseBdev1
00:07:22.370 13:17:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:22.370 13:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:07:22.370 13:17:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:07:22.370 13:17:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:07:22.370 13:17:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:07:22.370 13:17:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:07:22.370 13:17:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:07:22.370 13:17:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:07:22.370 13:17:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:22.370 13:17:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:22.370 13:17:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:22.370 13:17:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:07:22.370 13:17:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:22.370 13:17:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:22.370 [
00:07:22.370 {
00:07:22.370 "name": "BaseBdev1",
00:07:22.370 "aliases": [
00:07:22.370 "31ae6607-85b2-44ab-86af-74cb888d5c8f"
00:07:22.370 ],
00:07:22.370 "product_name": "Malloc disk",
00:07:22.370 "block_size": 512,
00:07:22.370 "num_blocks": 65536,
00:07:22.370 "uuid": "31ae6607-85b2-44ab-86af-74cb888d5c8f",
00:07:22.370 "assigned_rate_limits": {
00:07:22.370 "rw_ios_per_sec": 0,
00:07:22.370 "rw_mbytes_per_sec": 0,
00:07:22.370 "r_mbytes_per_sec": 0,
00:07:22.370 "w_mbytes_per_sec": 0
00:07:22.370 },
00:07:22.370 "claimed": true,
00:07:22.370 "claim_type": "exclusive_write",
00:07:22.370 "zoned": false,
00:07:22.370 "supported_io_types": {
00:07:22.370 "read": true,
00:07:22.370 "write": true,
00:07:22.370 "unmap": true,
00:07:22.370 "flush": true,
00:07:22.370 "reset": true,
00:07:22.370 "nvme_admin": false,
00:07:22.370 "nvme_io": false,
00:07:22.370 "nvme_io_md": false,
00:07:22.370 "write_zeroes": true,
00:07:22.370 "zcopy": true,
00:07:22.370 "get_zone_info": false,
00:07:22.370 "zone_management": false,
00:07:22.370 "zone_append": false,
00:07:22.370 "compare": false,
00:07:22.370 "compare_and_write": false,
00:07:22.370 "abort": true,
00:07:22.370 "seek_hole": false,
00:07:22.370 "seek_data": false,
00:07:22.370 "copy": true,
00:07:22.370 "nvme_iov_md": false
00:07:22.370 },
00:07:22.370 "memory_domains": [
00:07:22.370 {
00:07:22.370 "dma_device_id": "system",
00:07:22.370 "dma_device_type": 1
00:07:22.370 },
00:07:22.370 {
00:07:22.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:22.370 "dma_device_type": 2
00:07:22.370 }
00:07:22.370 ],
00:07:22.370 "driver_specific": {}
00:07:22.370 }
00:07:22.370 ]
00:07:22.370 13:17:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:22.370 13:17:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:07:22.370 13:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
00:07:22.370 13:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:22.370 13:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:22.370 13:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:07:22.370 13:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:22.370 13:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:22.370 13:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:22.370 13:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:22.370 13:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:22.370 13:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:22.370 13:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:22.370 13:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:22.370 13:17:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:22.370 13:17:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:22.370 13:17:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:22.370 13:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:22.370 "name": "Existed_Raid",
00:07:22.370 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:22.370 "strip_size_kb": 64,
00:07:22.370 "state": "configuring",
00:07:22.370 "raid_level": "concat",
00:07:22.370 "superblock": false,
00:07:22.370 "num_base_bdevs": 2,
00:07:22.370 "num_base_bdevs_discovered": 1,
00:07:22.370 "num_base_bdevs_operational": 2,
00:07:22.370 "base_bdevs_list": [
00:07:22.370 {
00:07:22.370 "name": "BaseBdev1",
00:07:22.370 "uuid": "31ae6607-85b2-44ab-86af-74cb888d5c8f",
00:07:22.370 "is_configured": true,
00:07:22.370 "data_offset": 0,
00:07:22.370 "data_size": 65536
00:07:22.370 },
00:07:22.370 {
00:07:22.370 "name": "BaseBdev2",
00:07:22.370 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:22.370 "is_configured": false,
00:07:22.370 "data_offset": 0,
00:07:22.370 "data_size": 0
00:07:22.370 }
00:07:22.370 ]
00:07:22.370 }'
00:07:22.370 13:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:22.370 13:17:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:22.631 13:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:07:22.631 13:17:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:22.631 13:17:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:22.631 [2024-11-17 13:17:11.768140] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:07:22.631 [2024-11-17 13:17:11.768197] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:07:22.631 13:17:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:22.631 13:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:07:22.631 13:17:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:22.631 13:17:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:22.631 [2024-11-17 13:17:11.780150] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:07:22.631 [2024-11-17 13:17:11.781969] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:07:22.631 [2024-11-17 13:17:11.782008] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:07:22.631 13:17:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:22.631 13:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:07:22.631 13:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:07:22.631 13:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
00:07:22.631 13:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:22.631 13:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:22.631 13:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:07:22.631 13:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:22.631 13:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:22.631 13:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:22.631 13:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:22.631 13:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:22.631 13:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:22.631 13:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:22.631 13:17:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:22.631 13:17:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:22.631 13:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:22.631 13:17:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:22.631 13:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:22.631 "name": "Existed_Raid",
00:07:22.631 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:22.631 "strip_size_kb": 64,
00:07:22.631 "state": "configuring",
00:07:22.631 "raid_level": "concat",
00:07:22.631 "superblock": false,
00:07:22.631 "num_base_bdevs": 2,
00:07:22.631 "num_base_bdevs_discovered": 1,
00:07:22.631 "num_base_bdevs_operational": 2,
00:07:22.631 "base_bdevs_list": [
00:07:22.631 {
00:07:22.631 "name": "BaseBdev1",
00:07:22.631 "uuid": "31ae6607-85b2-44ab-86af-74cb888d5c8f",
00:07:22.631 "is_configured": true,
00:07:22.631 "data_offset": 0,
00:07:22.631 "data_size": 65536
00:07:22.631 },
00:07:22.631 {
00:07:22.631 "name": "BaseBdev2",
00:07:22.631 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:22.631 "is_configured": false,
00:07:22.631 "data_offset": 0,
00:07:22.631 "data_size": 0
00:07:22.631 }
00:07:22.631 ]
00:07:22.631 }'
00:07:22.631 13:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:22.631 13:17:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:23.201 13:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:07:23.201 13:17:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:23.201 13:17:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:23.201 [2024-11-17 13:17:12.265617] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:07:23.201 [2024-11-17 13:17:12.265741] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:07:23.201 [2024-11-17 13:17:12.265755] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:07:23.201 [2024-11-17 13:17:12.266119] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:07:23.201 [2024-11-17 13:17:12.266314] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:07:23.201 [2024-11-17 13:17:12.266331] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:07:23.201 [2024-11-17 13:17:12.266616] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:23.201 BaseBdev2
00:07:23.201 13:17:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:23.201 13:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:07:23.201 13:17:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:07:23.201 13:17:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:07:23.201 13:17:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:07:23.201 13:17:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:07:23.201 13:17:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:07:23.201 13:17:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:07:23.201 13:17:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:23.201 13:17:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:23.201 13:17:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:23.201 13:17:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:07:23.201 13:17:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:23.201 13:17:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:23.201 [
00:07:23.201 {
00:07:23.201 "name": "BaseBdev2",
00:07:23.201 "aliases": [
00:07:23.201 "2060a670-7834-4f2a-800f-5ccd94f00413"
00:07:23.201 ],
00:07:23.201 "product_name": "Malloc disk",
00:07:23.201 "block_size": 512,
00:07:23.201 "num_blocks": 65536,
00:07:23.201 "uuid": "2060a670-7834-4f2a-800f-5ccd94f00413",
00:07:23.201 "assigned_rate_limits": {
00:07:23.201 "rw_ios_per_sec": 0,
00:07:23.201 "rw_mbytes_per_sec": 0,
00:07:23.201 "r_mbytes_per_sec": 0,
00:07:23.201 "w_mbytes_per_sec": 0
00:07:23.201 },
00:07:23.201 "claimed": true,
00:07:23.201 "claim_type": "exclusive_write",
00:07:23.201 "zoned": false,
00:07:23.201 "supported_io_types": {
00:07:23.201 "read": true,
00:07:23.201 "write": true,
00:07:23.201 "unmap": true,
00:07:23.201 "flush": true,
00:07:23.201 "reset": true,
00:07:23.201 "nvme_admin": false,
00:07:23.201 "nvme_io": false,
00:07:23.201 "nvme_io_md":
false, 00:07:23.201 "write_zeroes": true, 00:07:23.201 "zcopy": true, 00:07:23.201 "get_zone_info": false, 00:07:23.201 "zone_management": false, 00:07:23.201 "zone_append": false, 00:07:23.201 "compare": false, 00:07:23.201 "compare_and_write": false, 00:07:23.201 "abort": true, 00:07:23.201 "seek_hole": false, 00:07:23.201 "seek_data": false, 00:07:23.201 "copy": true, 00:07:23.201 "nvme_iov_md": false 00:07:23.201 }, 00:07:23.201 "memory_domains": [ 00:07:23.201 { 00:07:23.201 "dma_device_id": "system", 00:07:23.201 "dma_device_type": 1 00:07:23.201 }, 00:07:23.201 { 00:07:23.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:23.201 "dma_device_type": 2 00:07:23.201 } 00:07:23.201 ], 00:07:23.201 "driver_specific": {} 00:07:23.201 } 00:07:23.201 ] 00:07:23.201 13:17:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.201 13:17:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:23.201 13:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:23.201 13:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:23.201 13:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:23.201 13:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:23.201 13:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:23.201 13:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:23.201 13:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:23.201 13:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:23.201 13:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:23.201 13:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:23.201 13:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:23.201 13:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:23.201 13:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:23.201 13:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:23.201 13:17:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.201 13:17:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.201 13:17:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.201 13:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:23.201 "name": "Existed_Raid", 00:07:23.201 "uuid": "13faddd5-1f34-4b10-9a44-9ccb2f2d58cf", 00:07:23.201 "strip_size_kb": 64, 00:07:23.201 "state": "online", 00:07:23.201 "raid_level": "concat", 00:07:23.201 "superblock": false, 00:07:23.201 "num_base_bdevs": 2, 00:07:23.201 "num_base_bdevs_discovered": 2, 00:07:23.201 "num_base_bdevs_operational": 2, 00:07:23.201 "base_bdevs_list": [ 00:07:23.201 { 00:07:23.201 "name": "BaseBdev1", 00:07:23.201 "uuid": "31ae6607-85b2-44ab-86af-74cb888d5c8f", 00:07:23.201 "is_configured": true, 00:07:23.201 "data_offset": 0, 00:07:23.201 "data_size": 65536 00:07:23.201 }, 00:07:23.201 { 00:07:23.201 "name": "BaseBdev2", 00:07:23.201 "uuid": "2060a670-7834-4f2a-800f-5ccd94f00413", 00:07:23.201 "is_configured": true, 00:07:23.201 "data_offset": 0, 00:07:23.201 "data_size": 65536 00:07:23.201 } 00:07:23.201 ] 00:07:23.202 }' 00:07:23.202 13:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:23.202 13:17:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:23.771 13:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
13:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
13:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
13:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
13:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
13:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
13:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
13:17:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
13:17:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
13:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
[2024-11-17 13:17:12.741126] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
13:17:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
13:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:07:23.771 "name": "Existed_Raid",
00:07:23.771 "aliases": [
00:07:23.771 "13faddd5-1f34-4b10-9a44-9ccb2f2d58cf"
00:07:23.771 ],
00:07:23.771 "product_name": "Raid Volume",
00:07:23.771 "block_size": 512,
00:07:23.771 "num_blocks": 131072,
00:07:23.771 "uuid": "13faddd5-1f34-4b10-9a44-9ccb2f2d58cf",
00:07:23.771 "assigned_rate_limits": {
00:07:23.771 "rw_ios_per_sec": 0,
00:07:23.771 "rw_mbytes_per_sec": 0,
00:07:23.771 "r_mbytes_per_sec": 0,
00:07:23.771 "w_mbytes_per_sec": 0
00:07:23.771 },
00:07:23.771 "claimed": false,
00:07:23.771 "zoned": false,
00:07:23.771 "supported_io_types": {
00:07:23.771 "read": true,
00:07:23.771 "write": true,
00:07:23.771 "unmap": true,
00:07:23.771 "flush": true,
00:07:23.771 "reset": true,
00:07:23.771 "nvme_admin": false,
00:07:23.771 "nvme_io": false,
00:07:23.771 "nvme_io_md": false,
00:07:23.771 "write_zeroes": true,
00:07:23.771 "zcopy": false,
00:07:23.771 "get_zone_info": false,
00:07:23.771 "zone_management": false,
00:07:23.771 "zone_append": false,
00:07:23.771 "compare": false,
00:07:23.771 "compare_and_write": false,
00:07:23.771 "abort": false,
00:07:23.771 "seek_hole": false,
00:07:23.771 "seek_data": false,
00:07:23.771 "copy": false,
00:07:23.771 "nvme_iov_md": false
00:07:23.771 },
00:07:23.771 "memory_domains": [
00:07:23.771 {
00:07:23.771 "dma_device_id": "system",
00:07:23.771 "dma_device_type": 1
00:07:23.771 },
00:07:23.771 {
00:07:23.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:23.771 "dma_device_type": 2
00:07:23.771 },
00:07:23.771 {
00:07:23.771 "dma_device_id": "system",
00:07:23.772 "dma_device_type": 1
00:07:23.772 },
00:07:23.772 {
00:07:23.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:23.772 "dma_device_type": 2
00:07:23.772 }
00:07:23.772 ],
00:07:23.772 "driver_specific": {
00:07:23.772 "raid": {
00:07:23.772 "uuid": "13faddd5-1f34-4b10-9a44-9ccb2f2d58cf",
00:07:23.772 "strip_size_kb": 64,
00:07:23.772 "state": "online",
00:07:23.772 "raid_level": "concat",
00:07:23.772 "superblock": false,
00:07:23.772 "num_base_bdevs": 2,
00:07:23.772 "num_base_bdevs_discovered": 2,
00:07:23.772 "num_base_bdevs_operational": 2,
00:07:23.772 "base_bdevs_list": [
00:07:23.772 {
00:07:23.772 "name": "BaseBdev1",
00:07:23.772 "uuid": "31ae6607-85b2-44ab-86af-74cb888d5c8f",
00:07:23.772 "is_configured": true,
00:07:23.772 "data_offset": 0,
00:07:23.772 "data_size": 65536
00:07:23.772 },
00:07:23.772 {
00:07:23.772 "name": "BaseBdev2",
00:07:23.772 "uuid": "2060a670-7834-4f2a-800f-5ccd94f00413",
00:07:23.772 "is_configured": true,
00:07:23.772 "data_offset": 0,
00:07:23.772 "data_size": 65536
00:07:23.772 }
00:07:23.772 ]
00:07:23.772 }
00:07:23.772 }
00:07:23.772 }'
00:07:23.772 13:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
13:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:07:23.772 BaseBdev2'
00:07:23.772 13:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
13:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
13:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
13:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
13:17:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
13:17:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
13:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
13:17:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
13:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
13:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
13:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
13:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:07:23.772 13:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
13:17:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
13:17:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
13:17:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
13:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
13:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
13:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
13:17:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
13:17:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
[2024-11-17 13:17:12.964523] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
[2024-11-17 13:17:12.964553] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
[2024-11-17 13:17:12.964602] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:24.032 13:17:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
13:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state
13:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat
13:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in
13:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1
13:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline
13:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1
13:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
13:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
13:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
13:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
13:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
13:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
13:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
13:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
13:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
13:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
13:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
13:17:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
13:17:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
13:17:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
13:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:24.032 "name": "Existed_Raid",
00:07:24.032 "uuid": "13faddd5-1f34-4b10-9a44-9ccb2f2d58cf",
00:07:24.032 "strip_size_kb": 64,
00:07:24.032 "state": "offline",
00:07:24.032 "raid_level": "concat",
00:07:24.032 "superblock": false,
00:07:24.032 "num_base_bdevs": 2,
00:07:24.033 "num_base_bdevs_discovered": 1,
00:07:24.033 "num_base_bdevs_operational": 1,
00:07:24.033 "base_bdevs_list": [
00:07:24.033 {
00:07:24.033 "name": null,
00:07:24.033 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:24.033 "is_configured": false,
00:07:24.033 "data_offset": 0,
00:07:24.033 "data_size": 65536
00:07:24.033 },
00:07:24.033 {
00:07:24.033 "name": "BaseBdev2",
00:07:24.033 "uuid": "2060a670-7834-4f2a-800f-5ccd94f00413",
00:07:24.033 "is_configured": true,
00:07:24.033 "data_offset": 0,
00:07:24.033 "data_size": 65536
00:07:24.033 }
00:07:24.033 ]
00:07:24.033 }'
00:07:24.033 13:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
13:17:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:24.293 13:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
13:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
13:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
13:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
13:17:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
13:17:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
13:17:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
13:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:07:24.553 13:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
13:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
13:17:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
13:17:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
[2024-11-17 13:17:13.520979] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
[2024-11-17 13:17:13.521083] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
13:17:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
13:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
13:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
13:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
13:17:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
13:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
13:17:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
13:17:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
13:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
13:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
13:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']'
13:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61666
13:17:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61666 ']'
13:17:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 61666
13:17:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname
13:17:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
13:17:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61666
killing process with pid 61666
13:17:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
13:17:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
13:17:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61666'
13:17:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61666
[2024-11-17 13:17:13.699596] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
13:17:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61666
[2024-11-17 13:17:13.715996] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:07:25.934 13:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0
00:07:25.934
00:07:25.934 real 0m4.902s
00:07:25.934 user 0m7.069s
00:07:25.934 sys 0m0.803s
00:07:25.934 ************************************
00:07:25.934 END TEST raid_state_function_test
00:07:25.934 ************************************
00:07:25.934 13:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable
13:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
13:17:14 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true
13:17:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:07:25.934 13:17:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
13:17:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:07:25.934 ************************************
00:07:25.934 START TEST raid_state_function_test_sb
00:07:25.934 ************************************
00:07:25.934 13:17:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true
13:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat
13:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2
13:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true
13:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev
13:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
13:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
13:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
13:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
13:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
13:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
13:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
13:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
13:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
13:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:07:25.934 13:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
13:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size
13:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
13:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
13:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']'
13:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64
13:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
13:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']'
13:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s
Process raid pid: 61918
13:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61918
13:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
13:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61918'
13:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61918
13:17:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61918 ']'
13:17:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
13:17:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100
13:17:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
13:17:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable
13:17:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:25.935 [2024-11-17 13:17:14.968775] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization...
[2024-11-17 13:17:14.968990] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[2024-11-17 13:17:15.125336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:26.194 [2024-11-17 13:17:15.238864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:26.456 [2024-11-17 13:17:15.442928] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
[2024-11-17 13:17:15.443059] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:26.715 13:17:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 ))
13:17:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0
13:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
13:17:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
13:17:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
[2024-11-17 13:17:15.807411] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
[2024-11-17 13:17:15.807529] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
[2024-11-17 13:17:15.807559] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
[2024-11-17 13:17:15.807582] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
13:17:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
13:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
13:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
13:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
13:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
13:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
13:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
13:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
13:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
13:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
13:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
13:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
13:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:26.716 13:17:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
13:17:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
13:17:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
13:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:26.716 "name": "Existed_Raid",
00:07:26.716 "uuid": "7fc3f4d8-ab85-4bd9-a214-da193bbc358c",
00:07:26.716 "strip_size_kb": 64,
00:07:26.716 "state": "configuring",
00:07:26.716 "raid_level": "concat",
00:07:26.716 "superblock": true,
00:07:26.716 "num_base_bdevs": 2,
00:07:26.716 "num_base_bdevs_discovered": 0,
00:07:26.716 "num_base_bdevs_operational": 2,
00:07:26.716 "base_bdevs_list": [
00:07:26.716 {
00:07:26.716 "name": "BaseBdev1",
00:07:26.716 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:26.716 "is_configured": false,
00:07:26.716 "data_offset": 0,
00:07:26.716 "data_size": 0
00:07:26.716 },
00:07:26.716 {
00:07:26.716 "name": "BaseBdev2",
00:07:26.716 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:26.716 "is_configured": false,
00:07:26.716 "data_offset": 0,
00:07:26.716 "data_size": 0
00:07:26.716 }
00:07:26.716 ]
00:07:26.716 }'
00:07:26.716 13:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
13:17:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:27.285 13:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
13:17:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
13:17:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
[2024-11-17 13:17:16.278519] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
[2024-11-17 13:17:16.278556] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
13:17:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
13:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
13:17:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
13:17:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
[2024-11-17 13:17:16.290496] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
[2024-11-17 13:17:16.290573] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
[2024-11-17 13:17:16.290586] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
[2024-11-17 13:17:16.290613] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
13:17:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
13:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
13:17:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
13:17:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
[2024-11-17 13:17:16.337221] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:07:27.285 BaseBdev1
13:17:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
13:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
13:17:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
13:17:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
13:17:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
13:17:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
13:17:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
13:17:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
13:17:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
13:17:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
13:17:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
13:17:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
13:17:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
13:17:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:27.285 [
00:07:27.285 {
00:07:27.285 "name": "BaseBdev1",
00:07:27.285 "aliases": [
00:07:27.285 "ecd14eb3-0c47-4b28-93b0-ed71c0fd9c48"
00:07:27.285 ],
00:07:27.285 "product_name": "Malloc disk",
00:07:27.285 "block_size": 512,
00:07:27.285 "num_blocks": 65536,
00:07:27.285 "uuid": "ecd14eb3-0c47-4b28-93b0-ed71c0fd9c48",
00:07:27.285 "assigned_rate_limits": {
00:07:27.285 "rw_ios_per_sec": 0,
00:07:27.285 "rw_mbytes_per_sec": 0,
00:07:27.285 "r_mbytes_per_sec": 0,
00:07:27.285 "w_mbytes_per_sec": 0
00:07:27.285 },
00:07:27.285 "claimed": true,
00:07:27.285 "claim_type":
"exclusive_write", 00:07:27.285 "zoned": false, 00:07:27.285 "supported_io_types": { 00:07:27.285 "read": true, 00:07:27.285 "write": true, 00:07:27.285 "unmap": true, 00:07:27.285 "flush": true, 00:07:27.285 "reset": true, 00:07:27.285 "nvme_admin": false, 00:07:27.285 "nvme_io": false, 00:07:27.285 "nvme_io_md": false, 00:07:27.285 "write_zeroes": true, 00:07:27.285 "zcopy": true, 00:07:27.285 "get_zone_info": false, 00:07:27.285 "zone_management": false, 00:07:27.285 "zone_append": false, 00:07:27.285 "compare": false, 00:07:27.285 "compare_and_write": false, 00:07:27.285 "abort": true, 00:07:27.285 "seek_hole": false, 00:07:27.285 "seek_data": false, 00:07:27.285 "copy": true, 00:07:27.285 "nvme_iov_md": false 00:07:27.285 }, 00:07:27.285 "memory_domains": [ 00:07:27.285 { 00:07:27.285 "dma_device_id": "system", 00:07:27.285 "dma_device_type": 1 00:07:27.285 }, 00:07:27.285 { 00:07:27.285 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:27.285 "dma_device_type": 2 00:07:27.285 } 00:07:27.285 ], 00:07:27.285 "driver_specific": {} 00:07:27.285 } 00:07:27.285 ] 00:07:27.285 13:17:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.285 13:17:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:27.285 13:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:27.285 13:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:27.285 13:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:27.285 13:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:27.285 13:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:27.285 13:17:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:27.285 13:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:27.285 13:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:27.285 13:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:27.285 13:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:27.285 13:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.285 13:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:27.285 13:17:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.285 13:17:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.285 13:17:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.285 13:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:27.285 "name": "Existed_Raid", 00:07:27.285 "uuid": "d5b13975-73d6-49e9-8702-28fb84c85f7a", 00:07:27.285 "strip_size_kb": 64, 00:07:27.285 "state": "configuring", 00:07:27.286 "raid_level": "concat", 00:07:27.286 "superblock": true, 00:07:27.286 "num_base_bdevs": 2, 00:07:27.286 "num_base_bdevs_discovered": 1, 00:07:27.286 "num_base_bdevs_operational": 2, 00:07:27.286 "base_bdevs_list": [ 00:07:27.286 { 00:07:27.286 "name": "BaseBdev1", 00:07:27.286 "uuid": "ecd14eb3-0c47-4b28-93b0-ed71c0fd9c48", 00:07:27.286 "is_configured": true, 00:07:27.286 "data_offset": 2048, 00:07:27.286 "data_size": 63488 00:07:27.286 }, 00:07:27.286 { 00:07:27.286 "name": "BaseBdev2", 00:07:27.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:27.286 "is_configured": false, 00:07:27.286 
"data_offset": 0, 00:07:27.286 "data_size": 0 00:07:27.286 } 00:07:27.286 ] 00:07:27.286 }' 00:07:27.286 13:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:27.286 13:17:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.854 13:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:27.854 13:17:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.854 13:17:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.854 [2024-11-17 13:17:16.832490] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:27.854 [2024-11-17 13:17:16.832548] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:27.854 13:17:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.854 13:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:27.854 13:17:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.854 13:17:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.854 [2024-11-17 13:17:16.844492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:27.854 [2024-11-17 13:17:16.846338] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:27.854 [2024-11-17 13:17:16.846379] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:27.854 13:17:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.854 13:17:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:27.854 13:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:27.854 13:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:27.854 13:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:27.854 13:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:27.854 13:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:27.854 13:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:27.854 13:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:27.854 13:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:27.854 13:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:27.854 13:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:27.854 13:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:27.854 13:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:27.854 13:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.854 13:17:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.854 13:17:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.854 13:17:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.854 13:17:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:27.854 "name": "Existed_Raid", 00:07:27.854 "uuid": "8c1a35e0-8137-41ec-ad96-5196b37cce9e", 00:07:27.854 "strip_size_kb": 64, 00:07:27.854 "state": "configuring", 00:07:27.854 "raid_level": "concat", 00:07:27.854 "superblock": true, 00:07:27.854 "num_base_bdevs": 2, 00:07:27.854 "num_base_bdevs_discovered": 1, 00:07:27.854 "num_base_bdevs_operational": 2, 00:07:27.854 "base_bdevs_list": [ 00:07:27.854 { 00:07:27.854 "name": "BaseBdev1", 00:07:27.854 "uuid": "ecd14eb3-0c47-4b28-93b0-ed71c0fd9c48", 00:07:27.854 "is_configured": true, 00:07:27.854 "data_offset": 2048, 00:07:27.854 "data_size": 63488 00:07:27.854 }, 00:07:27.854 { 00:07:27.854 "name": "BaseBdev2", 00:07:27.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:27.854 "is_configured": false, 00:07:27.854 "data_offset": 0, 00:07:27.854 "data_size": 0 00:07:27.854 } 00:07:27.854 ] 00:07:27.854 }' 00:07:27.854 13:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:27.854 13:17:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.114 13:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:28.114 13:17:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.114 13:17:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.114 [2024-11-17 13:17:17.276183] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:28.114 [2024-11-17 13:17:17.276456] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:28.114 [2024-11-17 13:17:17.276497] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:28.114 BaseBdev2 00:07:28.114 [2024-11-17 13:17:17.276828] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005d40 00:07:28.114 [2024-11-17 13:17:17.276983] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:28.114 [2024-11-17 13:17:17.276996] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:28.114 [2024-11-17 13:17:17.277139] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:28.114 13:17:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.114 13:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:28.114 13:17:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:28.114 13:17:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:28.114 13:17:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:28.114 13:17:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:28.114 13:17:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:28.114 13:17:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:28.114 13:17:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.114 13:17:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.114 13:17:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.114 13:17:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:28.114 13:17:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.114 13:17:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:28.114 [ 00:07:28.114 { 00:07:28.114 "name": "BaseBdev2", 00:07:28.114 "aliases": [ 00:07:28.114 "7aa06a2c-24d4-4387-86e6-794560aa1259" 00:07:28.114 ], 00:07:28.114 "product_name": "Malloc disk", 00:07:28.114 "block_size": 512, 00:07:28.114 "num_blocks": 65536, 00:07:28.114 "uuid": "7aa06a2c-24d4-4387-86e6-794560aa1259", 00:07:28.114 "assigned_rate_limits": { 00:07:28.114 "rw_ios_per_sec": 0, 00:07:28.114 "rw_mbytes_per_sec": 0, 00:07:28.114 "r_mbytes_per_sec": 0, 00:07:28.114 "w_mbytes_per_sec": 0 00:07:28.114 }, 00:07:28.114 "claimed": true, 00:07:28.114 "claim_type": "exclusive_write", 00:07:28.114 "zoned": false, 00:07:28.114 "supported_io_types": { 00:07:28.114 "read": true, 00:07:28.114 "write": true, 00:07:28.114 "unmap": true, 00:07:28.114 "flush": true, 00:07:28.114 "reset": true, 00:07:28.114 "nvme_admin": false, 00:07:28.114 "nvme_io": false, 00:07:28.114 "nvme_io_md": false, 00:07:28.114 "write_zeroes": true, 00:07:28.114 "zcopy": true, 00:07:28.114 "get_zone_info": false, 00:07:28.114 "zone_management": false, 00:07:28.114 "zone_append": false, 00:07:28.114 "compare": false, 00:07:28.114 "compare_and_write": false, 00:07:28.114 "abort": true, 00:07:28.114 "seek_hole": false, 00:07:28.114 "seek_data": false, 00:07:28.114 "copy": true, 00:07:28.114 "nvme_iov_md": false 00:07:28.114 }, 00:07:28.114 "memory_domains": [ 00:07:28.114 { 00:07:28.114 "dma_device_id": "system", 00:07:28.114 "dma_device_type": 1 00:07:28.114 }, 00:07:28.114 { 00:07:28.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:28.114 "dma_device_type": 2 00:07:28.114 } 00:07:28.114 ], 00:07:28.114 "driver_specific": {} 00:07:28.114 } 00:07:28.114 ] 00:07:28.114 13:17:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.114 13:17:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:28.114 13:17:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:28.114 13:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:28.114 13:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:28.114 13:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:28.114 13:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:28.114 13:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:28.114 13:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:28.114 13:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:28.114 13:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:28.114 13:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:28.115 13:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:28.115 13:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:28.115 13:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:28.115 13:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:28.115 13:17:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.115 13:17:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.374 13:17:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.374 13:17:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:28.374 "name": "Existed_Raid", 00:07:28.374 "uuid": "8c1a35e0-8137-41ec-ad96-5196b37cce9e", 00:07:28.374 "strip_size_kb": 64, 00:07:28.374 "state": "online", 00:07:28.374 "raid_level": "concat", 00:07:28.374 "superblock": true, 00:07:28.374 "num_base_bdevs": 2, 00:07:28.374 "num_base_bdevs_discovered": 2, 00:07:28.374 "num_base_bdevs_operational": 2, 00:07:28.374 "base_bdevs_list": [ 00:07:28.374 { 00:07:28.374 "name": "BaseBdev1", 00:07:28.374 "uuid": "ecd14eb3-0c47-4b28-93b0-ed71c0fd9c48", 00:07:28.374 "is_configured": true, 00:07:28.374 "data_offset": 2048, 00:07:28.374 "data_size": 63488 00:07:28.374 }, 00:07:28.374 { 00:07:28.374 "name": "BaseBdev2", 00:07:28.374 "uuid": "7aa06a2c-24d4-4387-86e6-794560aa1259", 00:07:28.374 "is_configured": true, 00:07:28.374 "data_offset": 2048, 00:07:28.374 "data_size": 63488 00:07:28.374 } 00:07:28.374 ] 00:07:28.374 }' 00:07:28.374 13:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:28.374 13:17:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.634 13:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:28.634 13:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:28.634 13:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:28.634 13:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:28.634 13:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:28.634 13:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:28.634 13:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:28.634 13:17:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.634 13:17:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.634 13:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:28.634 [2024-11-17 13:17:17.755727] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:28.634 13:17:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.634 13:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:28.634 "name": "Existed_Raid", 00:07:28.634 "aliases": [ 00:07:28.634 "8c1a35e0-8137-41ec-ad96-5196b37cce9e" 00:07:28.634 ], 00:07:28.634 "product_name": "Raid Volume", 00:07:28.634 "block_size": 512, 00:07:28.634 "num_blocks": 126976, 00:07:28.634 "uuid": "8c1a35e0-8137-41ec-ad96-5196b37cce9e", 00:07:28.634 "assigned_rate_limits": { 00:07:28.634 "rw_ios_per_sec": 0, 00:07:28.634 "rw_mbytes_per_sec": 0, 00:07:28.634 "r_mbytes_per_sec": 0, 00:07:28.634 "w_mbytes_per_sec": 0 00:07:28.634 }, 00:07:28.634 "claimed": false, 00:07:28.634 "zoned": false, 00:07:28.634 "supported_io_types": { 00:07:28.634 "read": true, 00:07:28.634 "write": true, 00:07:28.634 "unmap": true, 00:07:28.634 "flush": true, 00:07:28.634 "reset": true, 00:07:28.634 "nvme_admin": false, 00:07:28.634 "nvme_io": false, 00:07:28.634 "nvme_io_md": false, 00:07:28.634 "write_zeroes": true, 00:07:28.634 "zcopy": false, 00:07:28.634 "get_zone_info": false, 00:07:28.634 "zone_management": false, 00:07:28.634 "zone_append": false, 00:07:28.634 "compare": false, 00:07:28.634 "compare_and_write": false, 00:07:28.634 "abort": false, 00:07:28.634 "seek_hole": false, 00:07:28.634 "seek_data": false, 00:07:28.634 "copy": false, 00:07:28.634 "nvme_iov_md": false 00:07:28.634 }, 00:07:28.634 "memory_domains": [ 00:07:28.634 { 00:07:28.634 "dma_device_id": "system", 00:07:28.634 
"dma_device_type": 1 00:07:28.634 }, 00:07:28.634 { 00:07:28.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:28.634 "dma_device_type": 2 00:07:28.634 }, 00:07:28.634 { 00:07:28.634 "dma_device_id": "system", 00:07:28.634 "dma_device_type": 1 00:07:28.634 }, 00:07:28.634 { 00:07:28.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:28.634 "dma_device_type": 2 00:07:28.634 } 00:07:28.634 ], 00:07:28.634 "driver_specific": { 00:07:28.634 "raid": { 00:07:28.634 "uuid": "8c1a35e0-8137-41ec-ad96-5196b37cce9e", 00:07:28.634 "strip_size_kb": 64, 00:07:28.634 "state": "online", 00:07:28.634 "raid_level": "concat", 00:07:28.634 "superblock": true, 00:07:28.634 "num_base_bdevs": 2, 00:07:28.634 "num_base_bdevs_discovered": 2, 00:07:28.634 "num_base_bdevs_operational": 2, 00:07:28.634 "base_bdevs_list": [ 00:07:28.634 { 00:07:28.634 "name": "BaseBdev1", 00:07:28.634 "uuid": "ecd14eb3-0c47-4b28-93b0-ed71c0fd9c48", 00:07:28.634 "is_configured": true, 00:07:28.634 "data_offset": 2048, 00:07:28.634 "data_size": 63488 00:07:28.634 }, 00:07:28.634 { 00:07:28.634 "name": "BaseBdev2", 00:07:28.634 "uuid": "7aa06a2c-24d4-4387-86e6-794560aa1259", 00:07:28.634 "is_configured": true, 00:07:28.634 "data_offset": 2048, 00:07:28.634 "data_size": 63488 00:07:28.634 } 00:07:28.634 ] 00:07:28.634 } 00:07:28.634 } 00:07:28.634 }' 00:07:28.634 13:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:28.634 13:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:28.634 BaseBdev2' 00:07:28.634 13:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:28.894 13:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:28.894 13:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 
-- # for name in $base_bdev_names 00:07:28.894 13:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:28.894 13:17:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.894 13:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:28.894 13:17:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.894 13:17:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.894 13:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:28.894 13:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:28.894 13:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:28.894 13:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:28.894 13:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:28.894 13:17:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.894 13:17:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.894 13:17:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.894 13:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:28.894 13:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:28.894 13:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:28.894 13:17:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.894 13:17:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.894 [2024-11-17 13:17:17.963094] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:28.894 [2024-11-17 13:17:17.963128] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:28.894 [2024-11-17 13:17:17.963174] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:28.894 13:17:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.894 13:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:28.894 13:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:28.894 13:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:28.894 13:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:28.894 13:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:28.894 13:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:28.894 13:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:28.894 13:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:28.894 13:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:28.894 13:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:28.894 13:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:28.894 13:17:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:28.894 13:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:28.894 13:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:28.894 13:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:28.894 13:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:28.894 13:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:28.894 13:17:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.894 13:17:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.894 13:17:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.894 13:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:28.894 "name": "Existed_Raid", 00:07:28.894 "uuid": "8c1a35e0-8137-41ec-ad96-5196b37cce9e", 00:07:28.894 "strip_size_kb": 64, 00:07:28.894 "state": "offline", 00:07:28.894 "raid_level": "concat", 00:07:28.894 "superblock": true, 00:07:28.894 "num_base_bdevs": 2, 00:07:28.894 "num_base_bdevs_discovered": 1, 00:07:28.894 "num_base_bdevs_operational": 1, 00:07:28.894 "base_bdevs_list": [ 00:07:28.894 { 00:07:28.894 "name": null, 00:07:28.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:28.894 "is_configured": false, 00:07:28.894 "data_offset": 0, 00:07:28.894 "data_size": 63488 00:07:28.894 }, 00:07:28.894 { 00:07:28.894 "name": "BaseBdev2", 00:07:28.894 "uuid": "7aa06a2c-24d4-4387-86e6-794560aa1259", 00:07:28.894 "is_configured": true, 00:07:28.894 "data_offset": 2048, 00:07:28.894 "data_size": 63488 00:07:28.894 } 00:07:28.894 ] 00:07:28.894 }' 00:07:28.894 13:17:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:28.894 13:17:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.463 13:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:29.463 13:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:29.463 13:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:29.463 13:17:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.463 13:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:29.463 13:17:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.463 13:17:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.463 13:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:29.463 13:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:29.463 13:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:29.463 13:17:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.463 13:17:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.463 [2024-11-17 13:17:18.554983] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:29.463 [2024-11-17 13:17:18.555057] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:29.464 13:17:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.464 13:17:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:29.464 13:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:29.464 13:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:29.464 13:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:29.464 13:17:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.464 13:17:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.464 13:17:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.464 13:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:29.464 13:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:29.464 13:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:29.464 13:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61918 00:07:29.464 13:17:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61918 ']' 00:07:29.464 13:17:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61918 00:07:29.464 13:17:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:29.464 13:17:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:29.723 13:17:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61918 00:07:29.723 killing process with pid 61918 00:07:29.723 13:17:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:29.723 13:17:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:07:29.723 13:17:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61918' 00:07:29.723 13:17:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61918 00:07:29.723 [2024-11-17 13:17:18.713617] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:29.723 13:17:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61918 00:07:29.723 [2024-11-17 13:17:18.730230] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:30.662 13:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:30.662 00:07:30.662 real 0m4.940s 00:07:30.662 user 0m7.151s 00:07:30.662 sys 0m0.781s 00:07:30.662 13:17:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.662 13:17:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.662 ************************************ 00:07:30.662 END TEST raid_state_function_test_sb 00:07:30.662 ************************************ 00:07:30.662 13:17:19 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:07:30.662 13:17:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:30.662 13:17:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.662 13:17:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:30.662 ************************************ 00:07:30.662 START TEST raid_superblock_test 00:07:30.662 ************************************ 00:07:30.662 13:17:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:07:30.662 13:17:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:07:30.662 13:17:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 
00:07:30.662 13:17:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:30.662 13:17:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:30.662 13:17:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:30.662 13:17:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:30.662 13:17:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:30.662 13:17:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:30.662 13:17:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:30.662 13:17:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:30.662 13:17:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:30.662 13:17:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:30.662 13:17:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:30.662 13:17:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:07:30.662 13:17:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:30.662 13:17:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:30.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:30.662 13:17:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62166 00:07:30.662 13:17:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62166 00:07:30.662 13:17:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62166 ']' 00:07:30.662 13:17:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.662 13:17:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:30.662 13:17:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.662 13:17:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:30.662 13:17:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.662 13:17:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:30.921 [2024-11-17 13:17:19.963286] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:07:30.921 [2024-11-17 13:17:19.963426] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62166 ] 00:07:30.921 [2024-11-17 13:17:20.134077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.221 [2024-11-17 13:17:20.245212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.480 [2024-11-17 13:17:20.444210] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:31.480 [2024-11-17 13:17:20.444285] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:31.739 13:17:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:31.740 13:17:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:31.740 13:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:31.740 13:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:31.740 13:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:31.740 13:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:31.740 13:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:31.740 13:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:31.740 13:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:31.740 13:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:31.740 13:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:31.740 
13:17:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.740 13:17:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.740 malloc1 00:07:31.740 13:17:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.740 13:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:31.740 13:17:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.740 13:17:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.740 [2024-11-17 13:17:20.831754] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:31.740 [2024-11-17 13:17:20.831816] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:31.740 [2024-11-17 13:17:20.831843] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:31.740 [2024-11-17 13:17:20.831853] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:31.740 [2024-11-17 13:17:20.834139] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:31.740 [2024-11-17 13:17:20.834176] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:31.740 pt1 00:07:31.740 13:17:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.740 13:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:31.740 13:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:31.740 13:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:31.740 13:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:31.740 13:17:20 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:31.740 13:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:31.740 13:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:31.740 13:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:31.740 13:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:31.740 13:17:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.740 13:17:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.740 malloc2 00:07:31.740 13:17:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.740 13:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:31.740 13:17:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.740 13:17:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.740 [2024-11-17 13:17:20.885913] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:31.740 [2024-11-17 13:17:20.885964] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:31.740 [2024-11-17 13:17:20.886002] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:31.740 [2024-11-17 13:17:20.886010] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:31.740 [2024-11-17 13:17:20.888063] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:31.740 [2024-11-17 13:17:20.888099] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:31.740 
pt2 00:07:31.740 13:17:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.740 13:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:31.740 13:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:31.740 13:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:31.740 13:17:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.740 13:17:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.740 [2024-11-17 13:17:20.897963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:31.740 [2024-11-17 13:17:20.899722] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:31.740 [2024-11-17 13:17:20.899875] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:31.740 [2024-11-17 13:17:20.899887] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:31.740 [2024-11-17 13:17:20.900111] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:31.740 [2024-11-17 13:17:20.900282] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:31.740 [2024-11-17 13:17:20.900308] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:31.740 [2024-11-17 13:17:20.900485] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:31.740 13:17:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.740 13:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:31.740 13:17:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:31.740 13:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:31.740 13:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:31.740 13:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:31.740 13:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:31.740 13:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:31.740 13:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:31.740 13:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:31.740 13:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:31.740 13:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.740 13:17:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.740 13:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:31.740 13:17:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.740 13:17:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.740 13:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:31.740 "name": "raid_bdev1", 00:07:31.740 "uuid": "44aa8a94-5730-4a06-8ad8-d210bc09ee9d", 00:07:31.740 "strip_size_kb": 64, 00:07:31.740 "state": "online", 00:07:31.740 "raid_level": "concat", 00:07:31.740 "superblock": true, 00:07:31.740 "num_base_bdevs": 2, 00:07:31.740 "num_base_bdevs_discovered": 2, 00:07:31.740 "num_base_bdevs_operational": 2, 00:07:31.740 "base_bdevs_list": [ 00:07:31.740 { 00:07:31.740 "name": "pt1", 
00:07:31.740 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:31.740 "is_configured": true, 00:07:31.740 "data_offset": 2048, 00:07:31.740 "data_size": 63488 00:07:31.740 }, 00:07:31.740 { 00:07:31.740 "name": "pt2", 00:07:31.740 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:31.740 "is_configured": true, 00:07:31.740 "data_offset": 2048, 00:07:31.740 "data_size": 63488 00:07:31.740 } 00:07:31.740 ] 00:07:31.740 }' 00:07:31.740 13:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:31.740 13:17:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.310 13:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:32.310 13:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:32.310 13:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:32.310 13:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:32.310 13:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:32.310 13:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:32.310 13:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:32.310 13:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.310 13:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.310 13:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:32.310 [2024-11-17 13:17:21.357450] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:32.310 13:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.310 13:17:21 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:32.310 "name": "raid_bdev1", 00:07:32.310 "aliases": [ 00:07:32.310 "44aa8a94-5730-4a06-8ad8-d210bc09ee9d" 00:07:32.310 ], 00:07:32.310 "product_name": "Raid Volume", 00:07:32.310 "block_size": 512, 00:07:32.310 "num_blocks": 126976, 00:07:32.310 "uuid": "44aa8a94-5730-4a06-8ad8-d210bc09ee9d", 00:07:32.310 "assigned_rate_limits": { 00:07:32.310 "rw_ios_per_sec": 0, 00:07:32.310 "rw_mbytes_per_sec": 0, 00:07:32.310 "r_mbytes_per_sec": 0, 00:07:32.310 "w_mbytes_per_sec": 0 00:07:32.310 }, 00:07:32.310 "claimed": false, 00:07:32.310 "zoned": false, 00:07:32.310 "supported_io_types": { 00:07:32.310 "read": true, 00:07:32.310 "write": true, 00:07:32.310 "unmap": true, 00:07:32.310 "flush": true, 00:07:32.310 "reset": true, 00:07:32.310 "nvme_admin": false, 00:07:32.310 "nvme_io": false, 00:07:32.310 "nvme_io_md": false, 00:07:32.310 "write_zeroes": true, 00:07:32.310 "zcopy": false, 00:07:32.310 "get_zone_info": false, 00:07:32.310 "zone_management": false, 00:07:32.310 "zone_append": false, 00:07:32.310 "compare": false, 00:07:32.310 "compare_and_write": false, 00:07:32.310 "abort": false, 00:07:32.310 "seek_hole": false, 00:07:32.310 "seek_data": false, 00:07:32.310 "copy": false, 00:07:32.310 "nvme_iov_md": false 00:07:32.310 }, 00:07:32.310 "memory_domains": [ 00:07:32.310 { 00:07:32.310 "dma_device_id": "system", 00:07:32.310 "dma_device_type": 1 00:07:32.310 }, 00:07:32.310 { 00:07:32.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:32.310 "dma_device_type": 2 00:07:32.310 }, 00:07:32.310 { 00:07:32.310 "dma_device_id": "system", 00:07:32.310 "dma_device_type": 1 00:07:32.310 }, 00:07:32.310 { 00:07:32.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:32.310 "dma_device_type": 2 00:07:32.310 } 00:07:32.310 ], 00:07:32.310 "driver_specific": { 00:07:32.310 "raid": { 00:07:32.310 "uuid": "44aa8a94-5730-4a06-8ad8-d210bc09ee9d", 00:07:32.310 "strip_size_kb": 64, 00:07:32.310 "state": "online", 00:07:32.310 
"raid_level": "concat", 00:07:32.310 "superblock": true, 00:07:32.310 "num_base_bdevs": 2, 00:07:32.310 "num_base_bdevs_discovered": 2, 00:07:32.310 "num_base_bdevs_operational": 2, 00:07:32.310 "base_bdevs_list": [ 00:07:32.310 { 00:07:32.310 "name": "pt1", 00:07:32.310 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:32.310 "is_configured": true, 00:07:32.310 "data_offset": 2048, 00:07:32.310 "data_size": 63488 00:07:32.310 }, 00:07:32.310 { 00:07:32.310 "name": "pt2", 00:07:32.310 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:32.310 "is_configured": true, 00:07:32.310 "data_offset": 2048, 00:07:32.310 "data_size": 63488 00:07:32.310 } 00:07:32.310 ] 00:07:32.310 } 00:07:32.310 } 00:07:32.310 }' 00:07:32.310 13:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:32.310 13:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:32.311 pt2' 00:07:32.311 13:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:32.311 13:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:32.311 13:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:32.311 13:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:32.311 13:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:32.311 13:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.311 13:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.311 13:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.311 13:17:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:32.311 13:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:32.311 13:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:32.311 13:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:32.311 13:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:32.311 13:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.311 13:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.311 13:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.571 13:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:32.571 13:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:32.571 13:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:32.571 13:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:32.571 13:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.571 13:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.571 [2024-11-17 13:17:21.549038] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:32.571 13:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.571 13:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=44aa8a94-5730-4a06-8ad8-d210bc09ee9d 00:07:32.571 13:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
44aa8a94-5730-4a06-8ad8-d210bc09ee9d ']' 00:07:32.571 13:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:32.571 13:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.571 13:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.571 [2024-11-17 13:17:21.592689] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:32.571 [2024-11-17 13:17:21.592714] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:32.571 [2024-11-17 13:17:21.592804] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:32.571 [2024-11-17 13:17:21.592851] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:32.571 [2024-11-17 13:17:21.592863] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:32.571 13:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.571 13:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.571 13:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.571 13:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.571 13:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:32.571 13:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.571 13:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:32.571 13:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:32.571 13:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:32.571 13:17:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:32.571 13:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.571 13:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.571 13:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.571 13:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:32.571 13:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:32.571 13:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.571 13:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.571 13:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.571 13:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:32.571 13:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.571 13:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:32.571 13:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.571 13:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.571 13:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:32.571 13:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:32.571 13:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:32.571 13:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:32.571 13:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:32.571 13:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:32.571 13:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:32.571 13:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:32.571 13:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:32.571 13:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.571 13:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.571 [2024-11-17 13:17:21.728557] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:32.571 [2024-11-17 13:17:21.730551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:32.571 [2024-11-17 13:17:21.730642] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:32.571 [2024-11-17 13:17:21.730695] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:32.571 [2024-11-17 13:17:21.730711] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:32.571 [2024-11-17 13:17:21.730720] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:32.571 request: 00:07:32.571 { 00:07:32.571 "name": "raid_bdev1", 00:07:32.571 "raid_level": "concat", 00:07:32.571 "base_bdevs": [ 00:07:32.571 "malloc1", 00:07:32.571 "malloc2" 00:07:32.571 ], 00:07:32.571 "strip_size_kb": 64, 
00:07:32.571 "superblock": false, 00:07:32.571 "method": "bdev_raid_create", 00:07:32.571 "req_id": 1 00:07:32.571 } 00:07:32.571 Got JSON-RPC error response 00:07:32.571 response: 00:07:32.571 { 00:07:32.571 "code": -17, 00:07:32.571 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:32.571 } 00:07:32.571 13:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:32.571 13:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:32.571 13:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:32.571 13:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:32.572 13:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:32.572 13:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:32.572 13:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.572 13:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.572 13:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.572 13:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.572 13:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:32.572 13:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:32.572 13:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:32.572 13:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.572 13:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.572 [2024-11-17 13:17:21.784441] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on malloc1 00:07:32.572 [2024-11-17 13:17:21.784528] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:32.572 [2024-11-17 13:17:21.784552] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:32.572 [2024-11-17 13:17:21.784563] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:32.572 [2024-11-17 13:17:21.786808] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:32.572 [2024-11-17 13:17:21.786849] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:32.572 [2024-11-17 13:17:21.786931] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:32.572 [2024-11-17 13:17:21.786990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:32.572 pt1 00:07:32.572 13:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.572 13:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:07:32.572 13:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:32.572 13:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:32.572 13:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:32.572 13:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:32.572 13:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:32.572 13:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:32.572 13:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:32.572 13:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:32.572 13:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:32.831 13:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.831 13:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.831 13:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.831 13:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:32.831 13:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.831 13:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:32.831 "name": "raid_bdev1", 00:07:32.831 "uuid": "44aa8a94-5730-4a06-8ad8-d210bc09ee9d", 00:07:32.831 "strip_size_kb": 64, 00:07:32.831 "state": "configuring", 00:07:32.831 "raid_level": "concat", 00:07:32.831 "superblock": true, 00:07:32.831 "num_base_bdevs": 2, 00:07:32.831 "num_base_bdevs_discovered": 1, 00:07:32.831 "num_base_bdevs_operational": 2, 00:07:32.831 "base_bdevs_list": [ 00:07:32.831 { 00:07:32.831 "name": "pt1", 00:07:32.831 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:32.831 "is_configured": true, 00:07:32.831 "data_offset": 2048, 00:07:32.831 "data_size": 63488 00:07:32.831 }, 00:07:32.831 { 00:07:32.831 "name": null, 00:07:32.831 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:32.831 "is_configured": false, 00:07:32.831 "data_offset": 2048, 00:07:32.831 "data_size": 63488 00:07:32.831 } 00:07:32.831 ] 00:07:32.831 }' 00:07:32.831 13:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:32.831 13:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.092 13:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:33.092 13:17:22 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:33.092 13:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:33.092 13:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:33.092 13:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.092 13:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.092 [2024-11-17 13:17:22.215704] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:33.092 [2024-11-17 13:17:22.215778] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:33.092 [2024-11-17 13:17:22.215801] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:33.092 [2024-11-17 13:17:22.215811] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:33.092 [2024-11-17 13:17:22.216334] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:33.092 [2024-11-17 13:17:22.216365] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:33.092 [2024-11-17 13:17:22.216456] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:33.092 [2024-11-17 13:17:22.216484] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:33.092 [2024-11-17 13:17:22.216650] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:33.092 [2024-11-17 13:17:22.216671] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:33.092 [2024-11-17 13:17:22.216918] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:33.092 [2024-11-17 13:17:22.217087] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 
00:07:33.092 [2024-11-17 13:17:22.217104] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:33.092 [2024-11-17 13:17:22.217267] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:33.092 pt2 00:07:33.092 13:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.092 13:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:33.092 13:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:33.092 13:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:33.092 13:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:33.092 13:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:33.092 13:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:33.092 13:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:33.092 13:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:33.092 13:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:33.092 13:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:33.092 13:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:33.092 13:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:33.092 13:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.092 13:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.092 13:17:22 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:33.092 13:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:33.092 13:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.092 13:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:33.092 "name": "raid_bdev1", 00:07:33.092 "uuid": "44aa8a94-5730-4a06-8ad8-d210bc09ee9d", 00:07:33.092 "strip_size_kb": 64, 00:07:33.092 "state": "online", 00:07:33.092 "raid_level": "concat", 00:07:33.092 "superblock": true, 00:07:33.092 "num_base_bdevs": 2, 00:07:33.092 "num_base_bdevs_discovered": 2, 00:07:33.092 "num_base_bdevs_operational": 2, 00:07:33.092 "base_bdevs_list": [ 00:07:33.092 { 00:07:33.092 "name": "pt1", 00:07:33.092 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:33.092 "is_configured": true, 00:07:33.092 "data_offset": 2048, 00:07:33.092 "data_size": 63488 00:07:33.092 }, 00:07:33.092 { 00:07:33.092 "name": "pt2", 00:07:33.092 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:33.092 "is_configured": true, 00:07:33.092 "data_offset": 2048, 00:07:33.092 "data_size": 63488 00:07:33.092 } 00:07:33.092 ] 00:07:33.092 }' 00:07:33.092 13:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:33.092 13:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.663 13:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:33.663 13:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:33.663 13:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:33.663 13:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:33.663 13:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:33.663 13:17:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:33.663 13:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:33.663 13:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:33.663 13:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.663 13:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.663 [2024-11-17 13:17:22.647236] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:33.663 13:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.663 13:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:33.663 "name": "raid_bdev1", 00:07:33.663 "aliases": [ 00:07:33.663 "44aa8a94-5730-4a06-8ad8-d210bc09ee9d" 00:07:33.663 ], 00:07:33.663 "product_name": "Raid Volume", 00:07:33.663 "block_size": 512, 00:07:33.663 "num_blocks": 126976, 00:07:33.663 "uuid": "44aa8a94-5730-4a06-8ad8-d210bc09ee9d", 00:07:33.663 "assigned_rate_limits": { 00:07:33.663 "rw_ios_per_sec": 0, 00:07:33.663 "rw_mbytes_per_sec": 0, 00:07:33.663 "r_mbytes_per_sec": 0, 00:07:33.663 "w_mbytes_per_sec": 0 00:07:33.663 }, 00:07:33.663 "claimed": false, 00:07:33.663 "zoned": false, 00:07:33.663 "supported_io_types": { 00:07:33.663 "read": true, 00:07:33.663 "write": true, 00:07:33.663 "unmap": true, 00:07:33.663 "flush": true, 00:07:33.663 "reset": true, 00:07:33.663 "nvme_admin": false, 00:07:33.663 "nvme_io": false, 00:07:33.663 "nvme_io_md": false, 00:07:33.663 "write_zeroes": true, 00:07:33.663 "zcopy": false, 00:07:33.663 "get_zone_info": false, 00:07:33.663 "zone_management": false, 00:07:33.663 "zone_append": false, 00:07:33.663 "compare": false, 00:07:33.663 "compare_and_write": false, 00:07:33.663 "abort": false, 00:07:33.663 "seek_hole": false, 00:07:33.663 
"seek_data": false, 00:07:33.663 "copy": false, 00:07:33.663 "nvme_iov_md": false 00:07:33.663 }, 00:07:33.663 "memory_domains": [ 00:07:33.663 { 00:07:33.663 "dma_device_id": "system", 00:07:33.663 "dma_device_type": 1 00:07:33.663 }, 00:07:33.663 { 00:07:33.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:33.663 "dma_device_type": 2 00:07:33.663 }, 00:07:33.663 { 00:07:33.663 "dma_device_id": "system", 00:07:33.663 "dma_device_type": 1 00:07:33.663 }, 00:07:33.663 { 00:07:33.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:33.663 "dma_device_type": 2 00:07:33.663 } 00:07:33.663 ], 00:07:33.663 "driver_specific": { 00:07:33.663 "raid": { 00:07:33.663 "uuid": "44aa8a94-5730-4a06-8ad8-d210bc09ee9d", 00:07:33.663 "strip_size_kb": 64, 00:07:33.663 "state": "online", 00:07:33.663 "raid_level": "concat", 00:07:33.663 "superblock": true, 00:07:33.663 "num_base_bdevs": 2, 00:07:33.663 "num_base_bdevs_discovered": 2, 00:07:33.663 "num_base_bdevs_operational": 2, 00:07:33.663 "base_bdevs_list": [ 00:07:33.663 { 00:07:33.663 "name": "pt1", 00:07:33.663 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:33.663 "is_configured": true, 00:07:33.663 "data_offset": 2048, 00:07:33.663 "data_size": 63488 00:07:33.663 }, 00:07:33.663 { 00:07:33.663 "name": "pt2", 00:07:33.664 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:33.664 "is_configured": true, 00:07:33.664 "data_offset": 2048, 00:07:33.664 "data_size": 63488 00:07:33.664 } 00:07:33.664 ] 00:07:33.664 } 00:07:33.664 } 00:07:33.664 }' 00:07:33.664 13:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:33.664 13:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:33.664 pt2' 00:07:33.664 13:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:33.664 13:17:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:33.664 13:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:33.664 13:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:33.664 13:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.664 13:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.664 13:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:33.664 13:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.664 13:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:33.664 13:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:33.664 13:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:33.664 13:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:33.664 13:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.664 13:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.664 13:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:33.664 13:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.664 13:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:33.664 13:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:33.664 13:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 
00:07:33.664 13:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:33.664 13:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.664 13:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.664 [2024-11-17 13:17:22.862842] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:33.664 13:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.924 13:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 44aa8a94-5730-4a06-8ad8-d210bc09ee9d '!=' 44aa8a94-5730-4a06-8ad8-d210bc09ee9d ']' 00:07:33.924 13:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:07:33.924 13:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:33.924 13:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:33.924 13:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62166 00:07:33.925 13:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62166 ']' 00:07:33.925 13:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 62166 00:07:33.925 13:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:33.925 13:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:33.925 13:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62166 00:07:33.925 13:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:33.925 13:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:33.925 killing process with pid 62166 00:07:33.925 13:17:22 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 62166' 00:07:33.925 13:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62166 00:07:33.925 [2024-11-17 13:17:22.936986] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:33.925 [2024-11-17 13:17:22.937102] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:33.925 13:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62166 00:07:33.925 [2024-11-17 13:17:22.937165] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:33.925 [2024-11-17 13:17:22.937179] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:33.925 [2024-11-17 13:17:23.140265] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:35.304 13:17:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:35.304 00:07:35.304 real 0m4.354s 00:07:35.304 user 0m6.099s 00:07:35.304 sys 0m0.705s 00:07:35.304 13:17:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.304 13:17:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.304 ************************************ 00:07:35.304 END TEST raid_superblock_test 00:07:35.304 ************************************ 00:07:35.304 13:17:24 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:07:35.304 13:17:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:35.304 13:17:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.304 13:17:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:35.304 ************************************ 00:07:35.305 START TEST raid_read_error_test 00:07:35.305 ************************************ 00:07:35.305 13:17:24 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:07:35.305 13:17:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:35.305 13:17:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:35.305 13:17:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:35.305 13:17:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:35.305 13:17:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:35.305 13:17:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:35.305 13:17:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:35.305 13:17:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:35.305 13:17:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:35.305 13:17:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:35.305 13:17:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:35.305 13:17:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:35.305 13:17:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:35.305 13:17:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:35.305 13:17:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:35.305 13:17:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:35.305 13:17:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:35.305 13:17:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:35.305 13:17:24 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:35.305 13:17:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:35.305 13:17:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:35.305 13:17:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:35.305 13:17:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.9BQMnDKFuB 00:07:35.305 13:17:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62372 00:07:35.305 13:17:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62372 00:07:35.305 13:17:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:35.305 13:17:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62372 ']' 00:07:35.305 13:17:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.305 13:17:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:35.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.305 13:17:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.305 13:17:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:35.305 13:17:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.305 [2024-11-17 13:17:24.405717] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:07:35.305 [2024-11-17 13:17:24.405849] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62372 ] 00:07:35.563 [2024-11-17 13:17:24.578139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.563 [2024-11-17 13:17:24.687917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.822 [2024-11-17 13:17:24.890129] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:35.822 [2024-11-17 13:17:24.890194] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:36.082 13:17:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:36.082 13:17:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:36.082 13:17:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:36.082 13:17:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:36.082 13:17:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.082 13:17:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.082 BaseBdev1_malloc 00:07:36.082 13:17:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.082 13:17:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:36.082 13:17:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.082 13:17:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.082 true 00:07:36.082 13:17:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:36.082 13:17:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:36.082 13:17:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.082 13:17:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.082 [2024-11-17 13:17:25.291175] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:36.082 [2024-11-17 13:17:25.291240] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:36.082 [2024-11-17 13:17:25.291261] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:36.082 [2024-11-17 13:17:25.291271] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:36.082 [2024-11-17 13:17:25.293339] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:36.082 [2024-11-17 13:17:25.293380] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:36.082 BaseBdev1 00:07:36.082 13:17:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.082 13:17:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:36.082 13:17:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:36.082 13:17:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.082 13:17:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.344 BaseBdev2_malloc 00:07:36.344 13:17:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.344 13:17:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:36.344 13:17:25 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.344 13:17:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.344 true 00:07:36.344 13:17:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.344 13:17:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:36.344 13:17:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.344 13:17:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.344 [2024-11-17 13:17:25.359559] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:36.344 [2024-11-17 13:17:25.359613] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:36.344 [2024-11-17 13:17:25.359632] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:36.344 [2024-11-17 13:17:25.359642] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:36.344 [2024-11-17 13:17:25.361823] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:36.344 [2024-11-17 13:17:25.361863] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:36.344 BaseBdev2 00:07:36.344 13:17:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.344 13:17:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:36.344 13:17:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.344 13:17:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.344 [2024-11-17 13:17:25.371590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:07:36.344 [2024-11-17 13:17:25.373475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:36.344 [2024-11-17 13:17:25.373674] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:36.344 [2024-11-17 13:17:25.373690] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:36.344 [2024-11-17 13:17:25.373926] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:36.344 [2024-11-17 13:17:25.374131] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:36.344 [2024-11-17 13:17:25.374153] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:36.344 [2024-11-17 13:17:25.374342] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:36.344 13:17:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.344 13:17:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:36.344 13:17:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:36.344 13:17:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:36.344 13:17:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:36.344 13:17:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:36.344 13:17:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:36.344 13:17:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:36.344 13:17:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:36.344 13:17:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:36.344 13:17:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:36.344 13:17:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:36.344 13:17:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.344 13:17:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:36.344 13:17:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.344 13:17:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.344 13:17:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:36.344 "name": "raid_bdev1", 00:07:36.344 "uuid": "a3ea999e-b30f-490b-bc82-fd5c09540b74", 00:07:36.344 "strip_size_kb": 64, 00:07:36.344 "state": "online", 00:07:36.344 "raid_level": "concat", 00:07:36.344 "superblock": true, 00:07:36.344 "num_base_bdevs": 2, 00:07:36.344 "num_base_bdevs_discovered": 2, 00:07:36.344 "num_base_bdevs_operational": 2, 00:07:36.344 "base_bdevs_list": [ 00:07:36.344 { 00:07:36.344 "name": "BaseBdev1", 00:07:36.344 "uuid": "0ad2c07f-c828-551e-bccc-b1f52a1dd506", 00:07:36.344 "is_configured": true, 00:07:36.344 "data_offset": 2048, 00:07:36.344 "data_size": 63488 00:07:36.344 }, 00:07:36.344 { 00:07:36.344 "name": "BaseBdev2", 00:07:36.344 "uuid": "9cf25968-89b2-5b70-ac26-ba1a91c1a84a", 00:07:36.344 "is_configured": true, 00:07:36.344 "data_offset": 2048, 00:07:36.344 "data_size": 63488 00:07:36.344 } 00:07:36.344 ] 00:07:36.344 }' 00:07:36.344 13:17:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:36.344 13:17:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.912 13:17:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:36.912 13:17:25 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:36.913 [2024-11-17 13:17:25.919872] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:37.851 13:17:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:37.851 13:17:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.851 13:17:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.851 13:17:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.851 13:17:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:37.851 13:17:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:37.851 13:17:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:37.851 13:17:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:37.851 13:17:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:37.851 13:17:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:37.851 13:17:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:37.851 13:17:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:37.851 13:17:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:37.851 13:17:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:37.851 13:17:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:37.851 13:17:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:07:37.851 13:17:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:37.851 13:17:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.851 13:17:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:37.851 13:17:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.851 13:17:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.851 13:17:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.851 13:17:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:37.851 "name": "raid_bdev1", 00:07:37.851 "uuid": "a3ea999e-b30f-490b-bc82-fd5c09540b74", 00:07:37.851 "strip_size_kb": 64, 00:07:37.851 "state": "online", 00:07:37.851 "raid_level": "concat", 00:07:37.851 "superblock": true, 00:07:37.851 "num_base_bdevs": 2, 00:07:37.851 "num_base_bdevs_discovered": 2, 00:07:37.851 "num_base_bdevs_operational": 2, 00:07:37.851 "base_bdevs_list": [ 00:07:37.851 { 00:07:37.851 "name": "BaseBdev1", 00:07:37.851 "uuid": "0ad2c07f-c828-551e-bccc-b1f52a1dd506", 00:07:37.851 "is_configured": true, 00:07:37.851 "data_offset": 2048, 00:07:37.851 "data_size": 63488 00:07:37.851 }, 00:07:37.851 { 00:07:37.851 "name": "BaseBdev2", 00:07:37.851 "uuid": "9cf25968-89b2-5b70-ac26-ba1a91c1a84a", 00:07:37.851 "is_configured": true, 00:07:37.851 "data_offset": 2048, 00:07:37.851 "data_size": 63488 00:07:37.851 } 00:07:37.851 ] 00:07:37.851 }' 00:07:37.851 13:17:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:37.851 13:17:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.110 13:17:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:38.110 13:17:27 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.110 13:17:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.110 [2024-11-17 13:17:27.298170] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:38.110 [2024-11-17 13:17:27.298223] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:38.110 [2024-11-17 13:17:27.300832] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:38.110 [2024-11-17 13:17:27.300883] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:38.110 [2024-11-17 13:17:27.300916] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:38.110 [2024-11-17 13:17:27.300930] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:38.110 { 00:07:38.110 "results": [ 00:07:38.110 { 00:07:38.110 "job": "raid_bdev1", 00:07:38.110 "core_mask": "0x1", 00:07:38.110 "workload": "randrw", 00:07:38.110 "percentage": 50, 00:07:38.110 "status": "finished", 00:07:38.110 "queue_depth": 1, 00:07:38.110 "io_size": 131072, 00:07:38.110 "runtime": 1.379061, 00:07:38.110 "iops": 16783.88410664938, 00:07:38.110 "mibps": 2097.9855133311726, 00:07:38.110 "io_failed": 1, 00:07:38.110 "io_timeout": 0, 00:07:38.110 "avg_latency_us": 82.66193961019593, 00:07:38.110 "min_latency_us": 24.258515283842794, 00:07:38.110 "max_latency_us": 1416.6078602620087 00:07:38.110 } 00:07:38.110 ], 00:07:38.110 "core_count": 1 00:07:38.110 } 00:07:38.110 13:17:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.110 13:17:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62372 00:07:38.110 13:17:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62372 ']' 00:07:38.110 13:17:27 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62372 00:07:38.110 13:17:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:38.110 13:17:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:38.110 13:17:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62372 00:07:38.369 13:17:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:38.369 13:17:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:38.369 killing process with pid 62372 00:07:38.369 13:17:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62372' 00:07:38.369 13:17:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62372 00:07:38.369 [2024-11-17 13:17:27.349522] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:38.369 13:17:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62372 00:07:38.369 [2024-11-17 13:17:27.477712] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:39.750 13:17:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.9BQMnDKFuB 00:07:39.750 13:17:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:39.750 13:17:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:39.750 13:17:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:07:39.750 13:17:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:39.750 13:17:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:39.750 13:17:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:39.750 13:17:28 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:07:39.750 00:07:39.750 real 0m4.310s 00:07:39.750 user 0m5.182s 00:07:39.750 sys 0m0.539s 00:07:39.750 13:17:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:39.750 13:17:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.750 ************************************ 00:07:39.750 END TEST raid_read_error_test 00:07:39.750 ************************************ 00:07:39.750 13:17:28 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:07:39.750 13:17:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:39.750 13:17:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:39.750 13:17:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:39.750 ************************************ 00:07:39.750 START TEST raid_write_error_test 00:07:39.750 ************************************ 00:07:39.750 13:17:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:07:39.750 13:17:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:39.750 13:17:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:39.750 13:17:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:39.750 13:17:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:39.750 13:17:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:39.750 13:17:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:39.750 13:17:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:39.750 13:17:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:39.750 13:17:28 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:39.750 13:17:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:39.750 13:17:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:39.750 13:17:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:39.750 13:17:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:39.750 13:17:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:39.750 13:17:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:39.750 13:17:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:39.750 13:17:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:39.750 13:17:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:39.750 13:17:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:39.750 13:17:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:39.750 13:17:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:39.750 13:17:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:39.750 13:17:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.OflAHQ1vcA 00:07:39.750 13:17:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62512 00:07:39.750 13:17:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62512 00:07:39.750 13:17:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62512 ']' 00:07:39.750 13:17:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:39.750 13:17:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.750 13:17:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:39.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.750 13:17:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.750 13:17:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:39.750 13:17:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.750 [2024-11-17 13:17:28.778270] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:07:39.750 [2024-11-17 13:17:28.778392] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62512 ] 00:07:39.750 [2024-11-17 13:17:28.936191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.009 [2024-11-17 13:17:29.046984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.267 [2024-11-17 13:17:29.242020] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:40.267 [2024-11-17 13:17:29.242093] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:40.527 13:17:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:40.527 13:17:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:40.527 13:17:29 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:40.528 13:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:40.528 13:17:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.528 13:17:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.528 BaseBdev1_malloc 00:07:40.528 13:17:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.528 13:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:40.528 13:17:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.528 13:17:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.528 true 00:07:40.528 13:17:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.528 13:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:40.528 13:17:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.528 13:17:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.528 [2024-11-17 13:17:29.652359] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:40.528 [2024-11-17 13:17:29.652434] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:40.528 [2024-11-17 13:17:29.652471] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:40.528 [2024-11-17 13:17:29.652485] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:40.528 [2024-11-17 13:17:29.654732] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:40.528 [2024-11-17 13:17:29.654774] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:40.528 BaseBdev1 00:07:40.528 13:17:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.528 13:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:40.528 13:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:40.528 13:17:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.528 13:17:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.528 BaseBdev2_malloc 00:07:40.528 13:17:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.528 13:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:40.528 13:17:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.528 13:17:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.528 true 00:07:40.528 13:17:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.528 13:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:40.528 13:17:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.528 13:17:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.528 [2024-11-17 13:17:29.719129] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:40.528 [2024-11-17 13:17:29.719188] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:40.528 [2024-11-17 13:17:29.719204] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 
00:07:40.528 [2024-11-17 13:17:29.719241] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:40.528 [2024-11-17 13:17:29.721330] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:40.528 [2024-11-17 13:17:29.721369] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:40.528 BaseBdev2 00:07:40.528 13:17:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.528 13:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:40.528 13:17:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.528 13:17:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.528 [2024-11-17 13:17:29.731171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:40.528 [2024-11-17 13:17:29.732974] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:40.528 [2024-11-17 13:17:29.733190] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:40.528 [2024-11-17 13:17:29.733206] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:40.528 [2024-11-17 13:17:29.733468] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:40.528 [2024-11-17 13:17:29.733673] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:40.528 [2024-11-17 13:17:29.733695] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:40.528 [2024-11-17 13:17:29.733875] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:40.528 13:17:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:40.528 13:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:40.528 13:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:40.528 13:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:40.528 13:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:40.528 13:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:40.528 13:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:40.528 13:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:40.528 13:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:40.528 13:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:40.528 13:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:40.528 13:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.528 13:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:40.528 13:17:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.528 13:17:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.788 13:17:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.788 13:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:40.788 "name": "raid_bdev1", 00:07:40.789 "uuid": "2aa88a0b-5244-4820-859b-12d387265bbf", 00:07:40.789 "strip_size_kb": 64, 00:07:40.789 "state": "online", 00:07:40.789 "raid_level": "concat", 00:07:40.789 "superblock": 
true, 00:07:40.789 "num_base_bdevs": 2, 00:07:40.789 "num_base_bdevs_discovered": 2, 00:07:40.789 "num_base_bdevs_operational": 2, 00:07:40.789 "base_bdevs_list": [ 00:07:40.789 { 00:07:40.789 "name": "BaseBdev1", 00:07:40.789 "uuid": "ce1b04fe-6020-563b-a195-438010d33df4", 00:07:40.789 "is_configured": true, 00:07:40.789 "data_offset": 2048, 00:07:40.789 "data_size": 63488 00:07:40.789 }, 00:07:40.789 { 00:07:40.789 "name": "BaseBdev2", 00:07:40.789 "uuid": "0ad290fc-9eef-5155-8838-3586b0e1c3ec", 00:07:40.789 "is_configured": true, 00:07:40.789 "data_offset": 2048, 00:07:40.789 "data_size": 63488 00:07:40.789 } 00:07:40.789 ] 00:07:40.789 }' 00:07:40.789 13:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:40.789 13:17:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.056 13:17:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:41.056 13:17:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:41.056 [2024-11-17 13:17:30.219467] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:42.003 13:17:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:42.003 13:17:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.003 13:17:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.003 13:17:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.003 13:17:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:42.003 13:17:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:42.003 13:17:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # 
expected_num_base_bdevs=2 00:07:42.004 13:17:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:42.004 13:17:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:42.004 13:17:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:42.004 13:17:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:42.004 13:17:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:42.004 13:17:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:42.004 13:17:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:42.004 13:17:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:42.004 13:17:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:42.004 13:17:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:42.004 13:17:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.004 13:17:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.004 13:17:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:42.004 13:17:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.004 13:17:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.004 13:17:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:42.004 "name": "raid_bdev1", 00:07:42.004 "uuid": "2aa88a0b-5244-4820-859b-12d387265bbf", 00:07:42.004 "strip_size_kb": 64, 00:07:42.004 "state": "online", 00:07:42.004 "raid_level": "concat", 
00:07:42.004 "superblock": true, 00:07:42.004 "num_base_bdevs": 2, 00:07:42.004 "num_base_bdevs_discovered": 2, 00:07:42.004 "num_base_bdevs_operational": 2, 00:07:42.004 "base_bdevs_list": [ 00:07:42.004 { 00:07:42.004 "name": "BaseBdev1", 00:07:42.004 "uuid": "ce1b04fe-6020-563b-a195-438010d33df4", 00:07:42.004 "is_configured": true, 00:07:42.004 "data_offset": 2048, 00:07:42.004 "data_size": 63488 00:07:42.004 }, 00:07:42.004 { 00:07:42.004 "name": "BaseBdev2", 00:07:42.004 "uuid": "0ad290fc-9eef-5155-8838-3586b0e1c3ec", 00:07:42.004 "is_configured": true, 00:07:42.004 "data_offset": 2048, 00:07:42.004 "data_size": 63488 00:07:42.004 } 00:07:42.004 ] 00:07:42.004 }' 00:07:42.004 13:17:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:42.004 13:17:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.572 13:17:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:42.572 13:17:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.572 13:17:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.572 [2024-11-17 13:17:31.593991] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:42.572 [2024-11-17 13:17:31.594034] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:42.572 [2024-11-17 13:17:31.596697] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:42.572 [2024-11-17 13:17:31.596763] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:42.572 [2024-11-17 13:17:31.596796] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:42.572 [2024-11-17 13:17:31.596811] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:42.572 { 
00:07:42.572 "results": [ 00:07:42.572 { 00:07:42.572 "job": "raid_bdev1", 00:07:42.572 "core_mask": "0x1", 00:07:42.572 "workload": "randrw", 00:07:42.572 "percentage": 50, 00:07:42.572 "status": "finished", 00:07:42.572 "queue_depth": 1, 00:07:42.572 "io_size": 131072, 00:07:42.572 "runtime": 1.375308, 00:07:42.572 "iops": 16447.95202238335, 00:07:42.572 "mibps": 2055.9940027979187, 00:07:42.572 "io_failed": 1, 00:07:42.572 "io_timeout": 0, 00:07:42.572 "avg_latency_us": 84.36615436764228, 00:07:42.572 "min_latency_us": 24.929257641921396, 00:07:42.572 "max_latency_us": 1574.0087336244542 00:07:42.573 } 00:07:42.573 ], 00:07:42.573 "core_count": 1 00:07:42.573 } 00:07:42.573 13:17:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.573 13:17:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62512 00:07:42.573 13:17:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 62512 ']' 00:07:42.573 13:17:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62512 00:07:42.573 13:17:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:07:42.573 13:17:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:42.573 13:17:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62512 00:07:42.573 13:17:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:42.573 13:17:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:42.573 killing process with pid 62512 00:07:42.573 13:17:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62512' 00:07:42.573 13:17:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62512 00:07:42.573 [2024-11-17 13:17:31.644618] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:42.573 13:17:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62512 00:07:42.573 [2024-11-17 13:17:31.775982] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:43.955 13:17:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:43.955 13:17:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.OflAHQ1vcA 00:07:43.955 13:17:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:43.955 13:17:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:07:43.955 13:17:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:43.955 13:17:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:43.955 13:17:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:43.955 13:17:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:07:43.955 00:07:43.955 real 0m4.258s 00:07:43.955 user 0m5.091s 00:07:43.955 sys 0m0.515s 00:07:43.955 13:17:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:43.955 13:17:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.955 ************************************ 00:07:43.955 END TEST raid_write_error_test 00:07:43.955 ************************************ 00:07:43.955 13:17:32 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:43.955 13:17:32 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:07:43.955 13:17:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:43.955 13:17:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:43.955 13:17:32 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:07:43.955 ************************************ 00:07:43.955 START TEST raid_state_function_test 00:07:43.955 ************************************ 00:07:43.955 13:17:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:07:43.955 13:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:43.955 13:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:43.955 13:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:43.955 13:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:43.955 13:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:43.955 13:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:43.955 13:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:43.955 13:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:43.955 13:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:43.955 13:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:43.955 13:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:43.955 13:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:43.955 13:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:43.955 13:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:43.955 13:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:43.955 13:17:33 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:43.955 13:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:43.955 13:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:43.955 13:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:43.955 13:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:43.955 13:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:43.955 13:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:43.955 13:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62650 00:07:43.955 13:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:43.955 13:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62650' 00:07:43.955 Process raid pid: 62650 00:07:43.955 13:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62650 00:07:43.955 13:17:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62650 ']' 00:07:43.955 13:17:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.955 13:17:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:43.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.955 13:17:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:43.955 13:17:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:43.955 13:17:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.955 [2024-11-17 13:17:33.099714] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:07:43.955 [2024-11-17 13:17:33.099837] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:44.215 [2024-11-17 13:17:33.276386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.215 [2024-11-17 13:17:33.393270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.475 [2024-11-17 13:17:33.600247] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:44.475 [2024-11-17 13:17:33.600290] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:44.736 13:17:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:44.736 13:17:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:44.736 13:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:44.736 13:17:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.736 13:17:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.736 [2024-11-17 13:17:33.954664] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:44.736 [2024-11-17 13:17:33.954718] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:44.736 [2024-11-17 13:17:33.954728] bdev.c:8277:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev2 00:07:44.736 [2024-11-17 13:17:33.954738] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:44.996 13:17:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.996 13:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:44.996 13:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:44.996 13:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:44.996 13:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:44.996 13:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:44.996 13:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:44.996 13:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:44.996 13:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:44.996 13:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:44.996 13:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:44.996 13:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.996 13:17:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.996 13:17:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.996 13:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:44.996 13:17:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:07:44.996 13:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:44.996 "name": "Existed_Raid", 00:07:44.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:44.996 "strip_size_kb": 0, 00:07:44.996 "state": "configuring", 00:07:44.996 "raid_level": "raid1", 00:07:44.996 "superblock": false, 00:07:44.996 "num_base_bdevs": 2, 00:07:44.996 "num_base_bdevs_discovered": 0, 00:07:44.996 "num_base_bdevs_operational": 2, 00:07:44.996 "base_bdevs_list": [ 00:07:44.996 { 00:07:44.996 "name": "BaseBdev1", 00:07:44.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:44.996 "is_configured": false, 00:07:44.996 "data_offset": 0, 00:07:44.996 "data_size": 0 00:07:44.996 }, 00:07:44.996 { 00:07:44.996 "name": "BaseBdev2", 00:07:44.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:44.996 "is_configured": false, 00:07:44.996 "data_offset": 0, 00:07:44.996 "data_size": 0 00:07:44.996 } 00:07:44.997 ] 00:07:44.997 }' 00:07:44.997 13:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:44.997 13:17:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.256 13:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:45.256 13:17:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.256 13:17:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.256 [2024-11-17 13:17:34.409885] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:45.256 [2024-11-17 13:17:34.409925] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:45.256 13:17:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.256 13:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 
-- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:45.256 13:17:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.256 13:17:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.256 [2024-11-17 13:17:34.421856] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:45.256 [2024-11-17 13:17:34.421900] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:45.256 [2024-11-17 13:17:34.421925] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:45.256 [2024-11-17 13:17:34.421936] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:45.256 13:17:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.256 13:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:45.256 13:17:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.256 13:17:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.256 [2024-11-17 13:17:34.468310] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:45.256 BaseBdev1 00:07:45.256 13:17:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.256 13:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:45.256 13:17:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:45.256 13:17:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:45.256 13:17:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:45.256 
13:17:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:45.256 13:17:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:45.256 13:17:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:45.257 13:17:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.257 13:17:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.516 13:17:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.516 13:17:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:45.516 13:17:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.516 13:17:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.516 [ 00:07:45.516 { 00:07:45.516 "name": "BaseBdev1", 00:07:45.516 "aliases": [ 00:07:45.516 "e2e919cf-69d2-492a-b7df-984a61823355" 00:07:45.516 ], 00:07:45.516 "product_name": "Malloc disk", 00:07:45.516 "block_size": 512, 00:07:45.516 "num_blocks": 65536, 00:07:45.516 "uuid": "e2e919cf-69d2-492a-b7df-984a61823355", 00:07:45.516 "assigned_rate_limits": { 00:07:45.516 "rw_ios_per_sec": 0, 00:07:45.516 "rw_mbytes_per_sec": 0, 00:07:45.516 "r_mbytes_per_sec": 0, 00:07:45.516 "w_mbytes_per_sec": 0 00:07:45.516 }, 00:07:45.516 "claimed": true, 00:07:45.516 "claim_type": "exclusive_write", 00:07:45.516 "zoned": false, 00:07:45.516 "supported_io_types": { 00:07:45.516 "read": true, 00:07:45.516 "write": true, 00:07:45.516 "unmap": true, 00:07:45.516 "flush": true, 00:07:45.516 "reset": true, 00:07:45.516 "nvme_admin": false, 00:07:45.516 "nvme_io": false, 00:07:45.516 "nvme_io_md": false, 00:07:45.516 "write_zeroes": true, 00:07:45.516 "zcopy": true, 00:07:45.516 "get_zone_info": 
false, 00:07:45.516 "zone_management": false, 00:07:45.516 "zone_append": false, 00:07:45.516 "compare": false, 00:07:45.516 "compare_and_write": false, 00:07:45.516 "abort": true, 00:07:45.516 "seek_hole": false, 00:07:45.516 "seek_data": false, 00:07:45.516 "copy": true, 00:07:45.516 "nvme_iov_md": false 00:07:45.516 }, 00:07:45.516 "memory_domains": [ 00:07:45.516 { 00:07:45.516 "dma_device_id": "system", 00:07:45.516 "dma_device_type": 1 00:07:45.516 }, 00:07:45.516 { 00:07:45.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:45.516 "dma_device_type": 2 00:07:45.516 } 00:07:45.516 ], 00:07:45.516 "driver_specific": {} 00:07:45.516 } 00:07:45.516 ] 00:07:45.516 13:17:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.516 13:17:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:45.516 13:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:45.516 13:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:45.516 13:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:45.516 13:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:45.516 13:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:45.516 13:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:45.516 13:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:45.516 13:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:45.516 13:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:45.516 13:17:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:07:45.516 13:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.516 13:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:45.516 13:17:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.516 13:17:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.516 13:17:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.516 13:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:45.516 "name": "Existed_Raid", 00:07:45.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:45.516 "strip_size_kb": 0, 00:07:45.516 "state": "configuring", 00:07:45.516 "raid_level": "raid1", 00:07:45.516 "superblock": false, 00:07:45.516 "num_base_bdevs": 2, 00:07:45.516 "num_base_bdevs_discovered": 1, 00:07:45.516 "num_base_bdevs_operational": 2, 00:07:45.516 "base_bdevs_list": [ 00:07:45.516 { 00:07:45.516 "name": "BaseBdev1", 00:07:45.516 "uuid": "e2e919cf-69d2-492a-b7df-984a61823355", 00:07:45.516 "is_configured": true, 00:07:45.516 "data_offset": 0, 00:07:45.516 "data_size": 65536 00:07:45.516 }, 00:07:45.516 { 00:07:45.516 "name": "BaseBdev2", 00:07:45.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:45.516 "is_configured": false, 00:07:45.516 "data_offset": 0, 00:07:45.516 "data_size": 0 00:07:45.516 } 00:07:45.516 ] 00:07:45.516 }' 00:07:45.516 13:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:45.516 13:17:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.810 13:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:45.810 13:17:34 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.810 13:17:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.810 [2024-11-17 13:17:34.915563] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:45.810 [2024-11-17 13:17:34.915677] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:45.810 13:17:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.810 13:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:45.810 13:17:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.810 13:17:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.810 [2024-11-17 13:17:34.927580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:45.810 [2024-11-17 13:17:34.929459] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:45.810 [2024-11-17 13:17:34.929560] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:45.810 13:17:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.810 13:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:45.810 13:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:45.810 13:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:45.810 13:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:45.810 13:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:07:45.810 13:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:45.810 13:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:45.810 13:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:45.810 13:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:45.810 13:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:45.810 13:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:45.810 13:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:45.810 13:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.810 13:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:45.810 13:17:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.810 13:17:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.810 13:17:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.810 13:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:45.810 "name": "Existed_Raid", 00:07:45.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:45.810 "strip_size_kb": 0, 00:07:45.810 "state": "configuring", 00:07:45.810 "raid_level": "raid1", 00:07:45.810 "superblock": false, 00:07:45.810 "num_base_bdevs": 2, 00:07:45.810 "num_base_bdevs_discovered": 1, 00:07:45.810 "num_base_bdevs_operational": 2, 00:07:45.810 "base_bdevs_list": [ 00:07:45.810 { 00:07:45.810 "name": "BaseBdev1", 00:07:45.810 "uuid": "e2e919cf-69d2-492a-b7df-984a61823355", 00:07:45.810 
"is_configured": true, 00:07:45.810 "data_offset": 0, 00:07:45.810 "data_size": 65536 00:07:45.810 }, 00:07:45.810 { 00:07:45.810 "name": "BaseBdev2", 00:07:45.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:45.810 "is_configured": false, 00:07:45.810 "data_offset": 0, 00:07:45.810 "data_size": 0 00:07:45.810 } 00:07:45.810 ] 00:07:45.810 }' 00:07:45.810 13:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:45.810 13:17:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.389 13:17:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:46.389 13:17:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.389 13:17:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.389 [2024-11-17 13:17:35.420756] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:46.389 [2024-11-17 13:17:35.420875] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:46.389 [2024-11-17 13:17:35.420900] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:46.389 [2024-11-17 13:17:35.421225] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:46.389 [2024-11-17 13:17:35.421445] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:46.389 [2024-11-17 13:17:35.421495] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:46.389 [2024-11-17 13:17:35.421808] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:46.389 BaseBdev2 00:07:46.389 13:17:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.389 13:17:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:46.389 13:17:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:46.389 13:17:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:46.389 13:17:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:46.389 13:17:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:46.389 13:17:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:46.389 13:17:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:46.389 13:17:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.389 13:17:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.389 13:17:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.389 13:17:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:46.389 13:17:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.389 13:17:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.389 [ 00:07:46.389 { 00:07:46.389 "name": "BaseBdev2", 00:07:46.389 "aliases": [ 00:07:46.389 "9397fd7a-92a4-463c-b4db-31abfdfe9dce" 00:07:46.389 ], 00:07:46.389 "product_name": "Malloc disk", 00:07:46.389 "block_size": 512, 00:07:46.389 "num_blocks": 65536, 00:07:46.389 "uuid": "9397fd7a-92a4-463c-b4db-31abfdfe9dce", 00:07:46.389 "assigned_rate_limits": { 00:07:46.389 "rw_ios_per_sec": 0, 00:07:46.389 "rw_mbytes_per_sec": 0, 00:07:46.389 "r_mbytes_per_sec": 0, 00:07:46.389 "w_mbytes_per_sec": 0 00:07:46.389 }, 00:07:46.389 "claimed": true, 00:07:46.389 "claim_type": 
"exclusive_write", 00:07:46.389 "zoned": false, 00:07:46.389 "supported_io_types": { 00:07:46.389 "read": true, 00:07:46.389 "write": true, 00:07:46.389 "unmap": true, 00:07:46.389 "flush": true, 00:07:46.389 "reset": true, 00:07:46.389 "nvme_admin": false, 00:07:46.389 "nvme_io": false, 00:07:46.389 "nvme_io_md": false, 00:07:46.389 "write_zeroes": true, 00:07:46.389 "zcopy": true, 00:07:46.389 "get_zone_info": false, 00:07:46.389 "zone_management": false, 00:07:46.389 "zone_append": false, 00:07:46.389 "compare": false, 00:07:46.389 "compare_and_write": false, 00:07:46.389 "abort": true, 00:07:46.389 "seek_hole": false, 00:07:46.389 "seek_data": false, 00:07:46.389 "copy": true, 00:07:46.389 "nvme_iov_md": false 00:07:46.389 }, 00:07:46.389 "memory_domains": [ 00:07:46.389 { 00:07:46.389 "dma_device_id": "system", 00:07:46.389 "dma_device_type": 1 00:07:46.389 }, 00:07:46.389 { 00:07:46.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.389 "dma_device_type": 2 00:07:46.389 } 00:07:46.389 ], 00:07:46.389 "driver_specific": {} 00:07:46.389 } 00:07:46.389 ] 00:07:46.389 13:17:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.389 13:17:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:46.389 13:17:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:46.389 13:17:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:46.389 13:17:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:46.389 13:17:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:46.389 13:17:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:46.389 13:17:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:46.389 
13:17:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:46.389 13:17:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:46.389 13:17:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:46.389 13:17:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.389 13:17:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:46.389 13:17:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:46.389 13:17:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.389 13:17:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:46.389 13:17:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.389 13:17:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.389 13:17:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.389 13:17:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:46.389 "name": "Existed_Raid", 00:07:46.389 "uuid": "67fc82fb-55c0-4118-af5e-5c69847c0d46", 00:07:46.389 "strip_size_kb": 0, 00:07:46.389 "state": "online", 00:07:46.389 "raid_level": "raid1", 00:07:46.389 "superblock": false, 00:07:46.389 "num_base_bdevs": 2, 00:07:46.389 "num_base_bdevs_discovered": 2, 00:07:46.389 "num_base_bdevs_operational": 2, 00:07:46.389 "base_bdevs_list": [ 00:07:46.389 { 00:07:46.389 "name": "BaseBdev1", 00:07:46.389 "uuid": "e2e919cf-69d2-492a-b7df-984a61823355", 00:07:46.389 "is_configured": true, 00:07:46.389 "data_offset": 0, 00:07:46.389 "data_size": 65536 00:07:46.389 }, 00:07:46.389 { 00:07:46.389 "name": "BaseBdev2", 
00:07:46.389 "uuid": "9397fd7a-92a4-463c-b4db-31abfdfe9dce", 00:07:46.389 "is_configured": true, 00:07:46.389 "data_offset": 0, 00:07:46.389 "data_size": 65536 00:07:46.389 } 00:07:46.389 ] 00:07:46.389 }' 00:07:46.389 13:17:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:46.389 13:17:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.959 13:17:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:46.959 13:17:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:46.959 13:17:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:46.959 13:17:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:46.959 13:17:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:46.959 13:17:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:46.959 13:17:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:46.959 13:17:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.959 13:17:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:46.959 13:17:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.959 [2024-11-17 13:17:35.900285] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:46.959 13:17:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.959 13:17:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:46.959 "name": "Existed_Raid", 00:07:46.959 "aliases": [ 00:07:46.959 "67fc82fb-55c0-4118-af5e-5c69847c0d46" 00:07:46.959 ], 
00:07:46.959 "product_name": "Raid Volume", 00:07:46.959 "block_size": 512, 00:07:46.959 "num_blocks": 65536, 00:07:46.959 "uuid": "67fc82fb-55c0-4118-af5e-5c69847c0d46", 00:07:46.959 "assigned_rate_limits": { 00:07:46.959 "rw_ios_per_sec": 0, 00:07:46.959 "rw_mbytes_per_sec": 0, 00:07:46.959 "r_mbytes_per_sec": 0, 00:07:46.959 "w_mbytes_per_sec": 0 00:07:46.959 }, 00:07:46.959 "claimed": false, 00:07:46.959 "zoned": false, 00:07:46.959 "supported_io_types": { 00:07:46.959 "read": true, 00:07:46.959 "write": true, 00:07:46.959 "unmap": false, 00:07:46.959 "flush": false, 00:07:46.959 "reset": true, 00:07:46.959 "nvme_admin": false, 00:07:46.959 "nvme_io": false, 00:07:46.959 "nvme_io_md": false, 00:07:46.959 "write_zeroes": true, 00:07:46.959 "zcopy": false, 00:07:46.959 "get_zone_info": false, 00:07:46.959 "zone_management": false, 00:07:46.959 "zone_append": false, 00:07:46.959 "compare": false, 00:07:46.959 "compare_and_write": false, 00:07:46.959 "abort": false, 00:07:46.959 "seek_hole": false, 00:07:46.959 "seek_data": false, 00:07:46.959 "copy": false, 00:07:46.959 "nvme_iov_md": false 00:07:46.959 }, 00:07:46.959 "memory_domains": [ 00:07:46.959 { 00:07:46.959 "dma_device_id": "system", 00:07:46.959 "dma_device_type": 1 00:07:46.959 }, 00:07:46.959 { 00:07:46.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.959 "dma_device_type": 2 00:07:46.959 }, 00:07:46.959 { 00:07:46.959 "dma_device_id": "system", 00:07:46.959 "dma_device_type": 1 00:07:46.959 }, 00:07:46.959 { 00:07:46.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.959 "dma_device_type": 2 00:07:46.959 } 00:07:46.959 ], 00:07:46.959 "driver_specific": { 00:07:46.959 "raid": { 00:07:46.959 "uuid": "67fc82fb-55c0-4118-af5e-5c69847c0d46", 00:07:46.959 "strip_size_kb": 0, 00:07:46.959 "state": "online", 00:07:46.959 "raid_level": "raid1", 00:07:46.959 "superblock": false, 00:07:46.959 "num_base_bdevs": 2, 00:07:46.959 "num_base_bdevs_discovered": 2, 00:07:46.960 "num_base_bdevs_operational": 
2, 00:07:46.960 "base_bdevs_list": [ 00:07:46.960 { 00:07:46.960 "name": "BaseBdev1", 00:07:46.960 "uuid": "e2e919cf-69d2-492a-b7df-984a61823355", 00:07:46.960 "is_configured": true, 00:07:46.960 "data_offset": 0, 00:07:46.960 "data_size": 65536 00:07:46.960 }, 00:07:46.960 { 00:07:46.960 "name": "BaseBdev2", 00:07:46.960 "uuid": "9397fd7a-92a4-463c-b4db-31abfdfe9dce", 00:07:46.960 "is_configured": true, 00:07:46.960 "data_offset": 0, 00:07:46.960 "data_size": 65536 00:07:46.960 } 00:07:46.960 ] 00:07:46.960 } 00:07:46.960 } 00:07:46.960 }' 00:07:46.960 13:17:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:46.960 13:17:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:46.960 BaseBdev2' 00:07:46.960 13:17:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:46.960 13:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:46.960 13:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:46.960 13:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:46.960 13:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:46.960 13:17:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.960 13:17:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.960 13:17:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.960 13:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:46.960 13:17:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:46.960 13:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:46.960 13:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:46.960 13:17:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.960 13:17:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.960 13:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:46.960 13:17:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.960 13:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:46.960 13:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:46.960 13:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:46.960 13:17:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.960 13:17:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.960 [2024-11-17 13:17:36.111648] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:47.219 13:17:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.219 13:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:47.219 13:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:47.219 13:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:47.219 13:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:07:47.219 13:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:47.219 13:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:47.219 13:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:47.219 13:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:47.219 13:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:47.219 13:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:47.219 13:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:47.219 13:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:47.219 13:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:47.219 13:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:47.219 13:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:47.219 13:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.219 13:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:47.220 13:17:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.220 13:17:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.220 13:17:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.220 13:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:47.220 "name": "Existed_Raid", 00:07:47.220 "uuid": 
"67fc82fb-55c0-4118-af5e-5c69847c0d46", 00:07:47.220 "strip_size_kb": 0, 00:07:47.220 "state": "online", 00:07:47.220 "raid_level": "raid1", 00:07:47.220 "superblock": false, 00:07:47.220 "num_base_bdevs": 2, 00:07:47.220 "num_base_bdevs_discovered": 1, 00:07:47.220 "num_base_bdevs_operational": 1, 00:07:47.220 "base_bdevs_list": [ 00:07:47.220 { 00:07:47.220 "name": null, 00:07:47.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:47.220 "is_configured": false, 00:07:47.220 "data_offset": 0, 00:07:47.220 "data_size": 65536 00:07:47.220 }, 00:07:47.220 { 00:07:47.220 "name": "BaseBdev2", 00:07:47.220 "uuid": "9397fd7a-92a4-463c-b4db-31abfdfe9dce", 00:07:47.220 "is_configured": true, 00:07:47.220 "data_offset": 0, 00:07:47.220 "data_size": 65536 00:07:47.220 } 00:07:47.220 ] 00:07:47.220 }' 00:07:47.220 13:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:47.220 13:17:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.479 13:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:47.479 13:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:47.479 13:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.479 13:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:47.479 13:17:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.479 13:17:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.479 13:17:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.479 13:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:47.479 13:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid 
']' 00:07:47.479 13:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:47.479 13:17:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.479 13:17:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.479 [2024-11-17 13:17:36.689279] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:47.479 [2024-11-17 13:17:36.689419] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:47.739 [2024-11-17 13:17:36.784422] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:47.739 [2024-11-17 13:17:36.784487] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:47.739 [2024-11-17 13:17:36.784505] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:47.739 13:17:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.739 13:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:47.739 13:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:47.739 13:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.739 13:17:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.739 13:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:47.739 13:17:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.739 13:17:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.739 13:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:47.739 
13:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:47.739 13:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:47.739 13:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62650 00:07:47.739 13:17:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62650 ']' 00:07:47.739 13:17:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 62650 00:07:47.739 13:17:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:47.740 13:17:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:47.740 13:17:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62650 00:07:47.740 killing process with pid 62650 00:07:47.740 13:17:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:47.740 13:17:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:47.740 13:17:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62650' 00:07:47.740 13:17:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62650 00:07:47.740 [2024-11-17 13:17:36.851376] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:47.740 13:17:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62650 00:07:47.740 [2024-11-17 13:17:36.867760] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:49.120 13:17:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:49.120 00:07:49.120 real 0m4.964s 00:07:49.120 user 0m7.180s 00:07:49.120 sys 0m0.768s 00:07:49.120 13:17:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:07:49.120 13:17:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.120 ************************************ 00:07:49.120 END TEST raid_state_function_test 00:07:49.120 ************************************ 00:07:49.120 13:17:38 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:07:49.120 13:17:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:49.120 13:17:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:49.120 13:17:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:49.120 ************************************ 00:07:49.120 START TEST raid_state_function_test_sb 00:07:49.120 ************************************ 00:07:49.120 13:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:07:49.120 13:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:49.120 13:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:49.120 13:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:49.120 13:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:49.120 13:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:49.120 13:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:49.120 13:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:49.120 13:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:49.120 13:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:49.120 13:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # 
echo BaseBdev2 00:07:49.120 13:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:49.120 13:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:49.120 13:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:49.120 13:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:49.120 13:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:49.120 13:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:49.120 13:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:49.120 13:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:49.120 13:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:49.120 13:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:49.120 13:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:49.120 13:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:49.120 13:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62903 00:07:49.120 13:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:49.120 13:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62903' 00:07:49.120 Process raid pid: 62903 00:07:49.120 13:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62903 00:07:49.120 13:17:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@835 -- # '[' -z 62903 ']' 00:07:49.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.120 13:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.120 13:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:49.120 13:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.120 13:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:49.120 13:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.120 [2024-11-17 13:17:38.137364] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:07:49.120 [2024-11-17 13:17:38.137558] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:49.120 [2024-11-17 13:17:38.310163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.380 [2024-11-17 13:17:38.429307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.640 [2024-11-17 13:17:38.631232] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:49.640 [2024-11-17 13:17:38.631373] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:49.900 13:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:49.900 13:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:49.900 13:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create 
-s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:49.900 13:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.900 13:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.900 [2024-11-17 13:17:38.960869] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:49.900 [2024-11-17 13:17:38.960924] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:49.900 [2024-11-17 13:17:38.960936] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:49.900 [2024-11-17 13:17:38.960946] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:49.900 13:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.900 13:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:49.900 13:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:49.900 13:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:49.900 13:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:49.900 13:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:49.900 13:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:49.900 13:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:49.900 13:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:49.900 13:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:07:49.900 13:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:49.900 13:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.900 13:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:49.900 13:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.900 13:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.900 13:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.900 13:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:49.900 "name": "Existed_Raid", 00:07:49.900 "uuid": "99151ed8-85b6-4cfb-991a-ddc591052855", 00:07:49.900 "strip_size_kb": 0, 00:07:49.900 "state": "configuring", 00:07:49.900 "raid_level": "raid1", 00:07:49.900 "superblock": true, 00:07:49.900 "num_base_bdevs": 2, 00:07:49.900 "num_base_bdevs_discovered": 0, 00:07:49.900 "num_base_bdevs_operational": 2, 00:07:49.900 "base_bdevs_list": [ 00:07:49.900 { 00:07:49.900 "name": "BaseBdev1", 00:07:49.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:49.900 "is_configured": false, 00:07:49.900 "data_offset": 0, 00:07:49.900 "data_size": 0 00:07:49.900 }, 00:07:49.900 { 00:07:49.900 "name": "BaseBdev2", 00:07:49.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:49.900 "is_configured": false, 00:07:49.900 "data_offset": 0, 00:07:49.900 "data_size": 0 00:07:49.900 } 00:07:49.900 ] 00:07:49.900 }' 00:07:49.900 13:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:49.901 13:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.471 13:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:07:50.471 13:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.471 13:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.471 [2024-11-17 13:17:39.428020] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:50.471 [2024-11-17 13:17:39.428117] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:50.471 13:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.471 13:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:50.471 13:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.471 13:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.471 [2024-11-17 13:17:39.435997] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:50.471 [2024-11-17 13:17:39.436078] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:50.471 [2024-11-17 13:17:39.436122] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:50.471 [2024-11-17 13:17:39.436162] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:50.471 13:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.471 13:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:50.471 13:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.472 13:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:07:50.472 [2024-11-17 13:17:39.478999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:50.472 BaseBdev1 00:07:50.472 13:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.472 13:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:50.472 13:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:50.472 13:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:50.472 13:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:50.472 13:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:50.472 13:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:50.472 13:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:50.472 13:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.472 13:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.472 13:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.472 13:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:50.472 13:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.472 13:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.472 [ 00:07:50.472 { 00:07:50.472 "name": "BaseBdev1", 00:07:50.472 "aliases": [ 00:07:50.472 "86a8abba-ba1e-4a9e-9042-f415c32ae534" 00:07:50.472 ], 00:07:50.472 "product_name": "Malloc disk", 00:07:50.472 "block_size": 512, 
00:07:50.472 "num_blocks": 65536, 00:07:50.472 "uuid": "86a8abba-ba1e-4a9e-9042-f415c32ae534", 00:07:50.472 "assigned_rate_limits": { 00:07:50.472 "rw_ios_per_sec": 0, 00:07:50.472 "rw_mbytes_per_sec": 0, 00:07:50.472 "r_mbytes_per_sec": 0, 00:07:50.472 "w_mbytes_per_sec": 0 00:07:50.472 }, 00:07:50.472 "claimed": true, 00:07:50.472 "claim_type": "exclusive_write", 00:07:50.472 "zoned": false, 00:07:50.472 "supported_io_types": { 00:07:50.472 "read": true, 00:07:50.472 "write": true, 00:07:50.472 "unmap": true, 00:07:50.472 "flush": true, 00:07:50.472 "reset": true, 00:07:50.472 "nvme_admin": false, 00:07:50.472 "nvme_io": false, 00:07:50.472 "nvme_io_md": false, 00:07:50.472 "write_zeroes": true, 00:07:50.472 "zcopy": true, 00:07:50.472 "get_zone_info": false, 00:07:50.472 "zone_management": false, 00:07:50.472 "zone_append": false, 00:07:50.472 "compare": false, 00:07:50.472 "compare_and_write": false, 00:07:50.472 "abort": true, 00:07:50.472 "seek_hole": false, 00:07:50.472 "seek_data": false, 00:07:50.472 "copy": true, 00:07:50.472 "nvme_iov_md": false 00:07:50.472 }, 00:07:50.472 "memory_domains": [ 00:07:50.472 { 00:07:50.472 "dma_device_id": "system", 00:07:50.472 "dma_device_type": 1 00:07:50.472 }, 00:07:50.472 { 00:07:50.472 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:50.472 "dma_device_type": 2 00:07:50.472 } 00:07:50.472 ], 00:07:50.472 "driver_specific": {} 00:07:50.472 } 00:07:50.472 ] 00:07:50.472 13:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.472 13:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:50.472 13:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:50.472 13:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:50.472 13:17:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:50.472 13:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:50.472 13:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:50.472 13:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:50.472 13:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.472 13:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.472 13:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.472 13:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.472 13:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.472 13:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:50.472 13:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.472 13:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.472 13:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.472 13:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.472 "name": "Existed_Raid", 00:07:50.472 "uuid": "ae1f01f1-ad0a-4ae3-9656-3ba3b1d7d830", 00:07:50.472 "strip_size_kb": 0, 00:07:50.472 "state": "configuring", 00:07:50.472 "raid_level": "raid1", 00:07:50.472 "superblock": true, 00:07:50.472 "num_base_bdevs": 2, 00:07:50.472 "num_base_bdevs_discovered": 1, 00:07:50.472 "num_base_bdevs_operational": 2, 00:07:50.472 "base_bdevs_list": [ 00:07:50.472 { 00:07:50.472 "name": "BaseBdev1", 
00:07:50.472 "uuid": "86a8abba-ba1e-4a9e-9042-f415c32ae534", 00:07:50.472 "is_configured": true, 00:07:50.472 "data_offset": 2048, 00:07:50.472 "data_size": 63488 00:07:50.472 }, 00:07:50.472 { 00:07:50.472 "name": "BaseBdev2", 00:07:50.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.472 "is_configured": false, 00:07:50.472 "data_offset": 0, 00:07:50.472 "data_size": 0 00:07:50.472 } 00:07:50.472 ] 00:07:50.472 }' 00:07:50.472 13:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.472 13:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.733 13:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:50.733 13:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.733 13:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.733 [2024-11-17 13:17:39.922326] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:50.733 [2024-11-17 13:17:39.922440] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:50.733 13:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.733 13:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:50.733 13:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.733 13:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.733 [2024-11-17 13:17:39.934353] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:50.733 [2024-11-17 13:17:39.936203] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev2 00:07:50.733 [2024-11-17 13:17:39.936308] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:50.733 13:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.733 13:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:50.733 13:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:50.733 13:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:50.733 13:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:50.733 13:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:50.733 13:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:50.733 13:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:50.733 13:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:50.733 13:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.733 13:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.733 13:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.733 13:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.733 13:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.733 13:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:50.733 13:17:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.733 13:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.992 13:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.992 13:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.992 "name": "Existed_Raid", 00:07:50.992 "uuid": "dddffeaf-c9de-469e-a7c5-3059cfa0ed01", 00:07:50.992 "strip_size_kb": 0, 00:07:50.992 "state": "configuring", 00:07:50.992 "raid_level": "raid1", 00:07:50.992 "superblock": true, 00:07:50.992 "num_base_bdevs": 2, 00:07:50.992 "num_base_bdevs_discovered": 1, 00:07:50.992 "num_base_bdevs_operational": 2, 00:07:50.992 "base_bdevs_list": [ 00:07:50.992 { 00:07:50.992 "name": "BaseBdev1", 00:07:50.992 "uuid": "86a8abba-ba1e-4a9e-9042-f415c32ae534", 00:07:50.992 "is_configured": true, 00:07:50.992 "data_offset": 2048, 00:07:50.993 "data_size": 63488 00:07:50.993 }, 00:07:50.993 { 00:07:50.993 "name": "BaseBdev2", 00:07:50.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.993 "is_configured": false, 00:07:50.993 "data_offset": 0, 00:07:50.993 "data_size": 0 00:07:50.993 } 00:07:50.993 ] 00:07:50.993 }' 00:07:50.993 13:17:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.993 13:17:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.253 13:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:51.253 13:17:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.253 13:17:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.253 BaseBdev2 00:07:51.253 [2024-11-17 13:17:40.430382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:51.253 [2024-11-17 
13:17:40.430646] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:51.253 [2024-11-17 13:17:40.430663] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:51.253 [2024-11-17 13:17:40.430986] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:51.253 [2024-11-17 13:17:40.431143] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:51.253 [2024-11-17 13:17:40.431156] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:51.253 [2024-11-17 13:17:40.431306] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:51.253 13:17:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.253 13:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:51.253 13:17:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:51.253 13:17:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:51.253 13:17:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:51.253 13:17:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:51.253 13:17:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:51.253 13:17:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:51.253 13:17:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.253 13:17:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.253 13:17:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:07:51.253 13:17:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:51.253 13:17:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.253 13:17:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.253 [ 00:07:51.253 { 00:07:51.253 "name": "BaseBdev2", 00:07:51.253 "aliases": [ 00:07:51.253 "2693c5b3-3960-4977-a52b-5d2c79c0555c" 00:07:51.253 ], 00:07:51.253 "product_name": "Malloc disk", 00:07:51.253 "block_size": 512, 00:07:51.253 "num_blocks": 65536, 00:07:51.253 "uuid": "2693c5b3-3960-4977-a52b-5d2c79c0555c", 00:07:51.253 "assigned_rate_limits": { 00:07:51.253 "rw_ios_per_sec": 0, 00:07:51.253 "rw_mbytes_per_sec": 0, 00:07:51.253 "r_mbytes_per_sec": 0, 00:07:51.253 "w_mbytes_per_sec": 0 00:07:51.253 }, 00:07:51.253 "claimed": true, 00:07:51.253 "claim_type": "exclusive_write", 00:07:51.253 "zoned": false, 00:07:51.253 "supported_io_types": { 00:07:51.253 "read": true, 00:07:51.253 "write": true, 00:07:51.253 "unmap": true, 00:07:51.253 "flush": true, 00:07:51.253 "reset": true, 00:07:51.253 "nvme_admin": false, 00:07:51.253 "nvme_io": false, 00:07:51.253 "nvme_io_md": false, 00:07:51.253 "write_zeroes": true, 00:07:51.253 "zcopy": true, 00:07:51.253 "get_zone_info": false, 00:07:51.253 "zone_management": false, 00:07:51.253 "zone_append": false, 00:07:51.253 "compare": false, 00:07:51.253 "compare_and_write": false, 00:07:51.253 "abort": true, 00:07:51.253 "seek_hole": false, 00:07:51.253 "seek_data": false, 00:07:51.253 "copy": true, 00:07:51.253 "nvme_iov_md": false 00:07:51.253 }, 00:07:51.253 "memory_domains": [ 00:07:51.253 { 00:07:51.253 "dma_device_id": "system", 00:07:51.253 "dma_device_type": 1 00:07:51.253 }, 00:07:51.253 { 00:07:51.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:51.253 "dma_device_type": 2 00:07:51.253 } 00:07:51.253 ], 00:07:51.253 "driver_specific": {} 
00:07:51.253 } 00:07:51.253 ] 00:07:51.253 13:17:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.253 13:17:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:51.253 13:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:51.253 13:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:51.253 13:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:51.253 13:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:51.253 13:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:51.253 13:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:51.253 13:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:51.253 13:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:51.253 13:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:51.253 13:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:51.253 13:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:51.253 13:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:51.253 13:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:51.253 13:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.253 13:17:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:51.253 13:17:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.513 13:17:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.513 13:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:51.513 "name": "Existed_Raid", 00:07:51.513 "uuid": "dddffeaf-c9de-469e-a7c5-3059cfa0ed01", 00:07:51.513 "strip_size_kb": 0, 00:07:51.513 "state": "online", 00:07:51.513 "raid_level": "raid1", 00:07:51.513 "superblock": true, 00:07:51.513 "num_base_bdevs": 2, 00:07:51.513 "num_base_bdevs_discovered": 2, 00:07:51.513 "num_base_bdevs_operational": 2, 00:07:51.513 "base_bdevs_list": [ 00:07:51.513 { 00:07:51.513 "name": "BaseBdev1", 00:07:51.513 "uuid": "86a8abba-ba1e-4a9e-9042-f415c32ae534", 00:07:51.513 "is_configured": true, 00:07:51.513 "data_offset": 2048, 00:07:51.513 "data_size": 63488 00:07:51.513 }, 00:07:51.513 { 00:07:51.513 "name": "BaseBdev2", 00:07:51.513 "uuid": "2693c5b3-3960-4977-a52b-5d2c79c0555c", 00:07:51.513 "is_configured": true, 00:07:51.513 "data_offset": 2048, 00:07:51.513 "data_size": 63488 00:07:51.513 } 00:07:51.513 ] 00:07:51.513 }' 00:07:51.513 13:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:51.513 13:17:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.773 13:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:51.773 13:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:51.773 13:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:51.773 13:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:51.773 13:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local 
name 00:07:51.773 13:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:51.773 13:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:51.773 13:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:51.773 13:17:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.773 13:17:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.773 [2024-11-17 13:17:40.877942] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:51.773 13:17:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.773 13:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:51.773 "name": "Existed_Raid", 00:07:51.773 "aliases": [ 00:07:51.773 "dddffeaf-c9de-469e-a7c5-3059cfa0ed01" 00:07:51.773 ], 00:07:51.773 "product_name": "Raid Volume", 00:07:51.773 "block_size": 512, 00:07:51.773 "num_blocks": 63488, 00:07:51.773 "uuid": "dddffeaf-c9de-469e-a7c5-3059cfa0ed01", 00:07:51.773 "assigned_rate_limits": { 00:07:51.773 "rw_ios_per_sec": 0, 00:07:51.773 "rw_mbytes_per_sec": 0, 00:07:51.773 "r_mbytes_per_sec": 0, 00:07:51.773 "w_mbytes_per_sec": 0 00:07:51.773 }, 00:07:51.773 "claimed": false, 00:07:51.773 "zoned": false, 00:07:51.773 "supported_io_types": { 00:07:51.773 "read": true, 00:07:51.773 "write": true, 00:07:51.773 "unmap": false, 00:07:51.773 "flush": false, 00:07:51.773 "reset": true, 00:07:51.773 "nvme_admin": false, 00:07:51.773 "nvme_io": false, 00:07:51.773 "nvme_io_md": false, 00:07:51.773 "write_zeroes": true, 00:07:51.773 "zcopy": false, 00:07:51.773 "get_zone_info": false, 00:07:51.773 "zone_management": false, 00:07:51.773 "zone_append": false, 00:07:51.773 "compare": false, 00:07:51.773 "compare_and_write": false, 
00:07:51.773 "abort": false, 00:07:51.773 "seek_hole": false, 00:07:51.773 "seek_data": false, 00:07:51.773 "copy": false, 00:07:51.773 "nvme_iov_md": false 00:07:51.773 }, 00:07:51.773 "memory_domains": [ 00:07:51.773 { 00:07:51.773 "dma_device_id": "system", 00:07:51.773 "dma_device_type": 1 00:07:51.773 }, 00:07:51.773 { 00:07:51.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:51.773 "dma_device_type": 2 00:07:51.773 }, 00:07:51.773 { 00:07:51.773 "dma_device_id": "system", 00:07:51.773 "dma_device_type": 1 00:07:51.773 }, 00:07:51.773 { 00:07:51.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:51.773 "dma_device_type": 2 00:07:51.773 } 00:07:51.773 ], 00:07:51.773 "driver_specific": { 00:07:51.773 "raid": { 00:07:51.773 "uuid": "dddffeaf-c9de-469e-a7c5-3059cfa0ed01", 00:07:51.773 "strip_size_kb": 0, 00:07:51.774 "state": "online", 00:07:51.774 "raid_level": "raid1", 00:07:51.774 "superblock": true, 00:07:51.774 "num_base_bdevs": 2, 00:07:51.774 "num_base_bdevs_discovered": 2, 00:07:51.774 "num_base_bdevs_operational": 2, 00:07:51.774 "base_bdevs_list": [ 00:07:51.774 { 00:07:51.774 "name": "BaseBdev1", 00:07:51.774 "uuid": "86a8abba-ba1e-4a9e-9042-f415c32ae534", 00:07:51.774 "is_configured": true, 00:07:51.774 "data_offset": 2048, 00:07:51.774 "data_size": 63488 00:07:51.774 }, 00:07:51.774 { 00:07:51.774 "name": "BaseBdev2", 00:07:51.774 "uuid": "2693c5b3-3960-4977-a52b-5d2c79c0555c", 00:07:51.774 "is_configured": true, 00:07:51.774 "data_offset": 2048, 00:07:51.774 "data_size": 63488 00:07:51.774 } 00:07:51.774 ] 00:07:51.774 } 00:07:51.774 } 00:07:51.774 }' 00:07:51.774 13:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:51.774 13:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:51.774 BaseBdev2' 00:07:51.774 13:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 
-- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:51.774 13:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:51.774 13:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:52.033 13:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:52.033 13:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:52.033 13:17:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.033 13:17:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.033 13:17:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.033 13:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:52.033 13:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:52.033 13:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:52.033 13:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:52.033 13:17:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.033 13:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:52.033 13:17:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.033 13:17:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.033 13:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:07:52.033 13:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:52.033 13:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:52.033 13:17:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.033 13:17:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.033 [2024-11-17 13:17:41.077353] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:52.033 13:17:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.033 13:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:52.033 13:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:52.033 13:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:52.033 13:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:07:52.033 13:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:52.033 13:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:52.033 13:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:52.033 13:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:52.034 13:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:52.034 13:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:52.034 13:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:52.034 13:17:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:52.034 13:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:52.034 13:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:52.034 13:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:52.034 13:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.034 13:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:52.034 13:17:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.034 13:17:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.034 13:17:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.034 13:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:52.034 "name": "Existed_Raid", 00:07:52.034 "uuid": "dddffeaf-c9de-469e-a7c5-3059cfa0ed01", 00:07:52.034 "strip_size_kb": 0, 00:07:52.034 "state": "online", 00:07:52.034 "raid_level": "raid1", 00:07:52.034 "superblock": true, 00:07:52.034 "num_base_bdevs": 2, 00:07:52.034 "num_base_bdevs_discovered": 1, 00:07:52.034 "num_base_bdevs_operational": 1, 00:07:52.034 "base_bdevs_list": [ 00:07:52.034 { 00:07:52.034 "name": null, 00:07:52.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:52.034 "is_configured": false, 00:07:52.034 "data_offset": 0, 00:07:52.034 "data_size": 63488 00:07:52.034 }, 00:07:52.034 { 00:07:52.034 "name": "BaseBdev2", 00:07:52.034 "uuid": "2693c5b3-3960-4977-a52b-5d2c79c0555c", 00:07:52.034 "is_configured": true, 00:07:52.034 "data_offset": 2048, 00:07:52.034 "data_size": 63488 00:07:52.034 } 00:07:52.034 ] 00:07:52.034 }' 00:07:52.034 
13:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:52.034 13:17:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.603 13:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:52.603 13:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:52.603 13:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:52.603 13:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.603 13:17:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.603 13:17:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.603 13:17:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.603 13:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:52.603 13:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:52.603 13:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:52.603 13:17:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.603 13:17:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.603 [2024-11-17 13:17:41.656687] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:52.603 [2024-11-17 13:17:41.656792] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:52.603 [2024-11-17 13:17:41.752914] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:52.603 [2024-11-17 13:17:41.752964] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:52.603 [2024-11-17 13:17:41.752976] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:52.603 13:17:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.603 13:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:52.603 13:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:52.603 13:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.603 13:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:52.603 13:17:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.603 13:17:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.603 13:17:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.603 13:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:52.603 13:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:52.603 13:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:52.603 13:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62903 00:07:52.603 13:17:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62903 ']' 00:07:52.603 13:17:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62903 00:07:52.603 13:17:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:52.603 13:17:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:52.603 13:17:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62903 00:07:52.864 killing process with pid 62903 00:07:52.864 13:17:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:52.864 13:17:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:52.864 13:17:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62903' 00:07:52.864 13:17:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62903 00:07:52.864 [2024-11-17 13:17:41.845483] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:52.864 13:17:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62903 00:07:52.864 [2024-11-17 13:17:41.861840] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:53.804 13:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:53.804 00:07:53.804 real 0m4.918s 00:07:53.804 user 0m7.086s 00:07:53.804 sys 0m0.790s 00:07:53.804 13:17:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:53.804 13:17:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.804 ************************************ 00:07:53.804 END TEST raid_state_function_test_sb 00:07:53.804 ************************************ 00:07:53.804 13:17:43 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:07:53.804 13:17:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:53.804 13:17:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:53.804 13:17:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:54.064 
************************************ 00:07:54.064 START TEST raid_superblock_test 00:07:54.064 ************************************ 00:07:54.064 13:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:07:54.064 13:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:07:54.064 13:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:54.064 13:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:54.064 13:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:54.064 13:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:54.064 13:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:54.064 13:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:54.064 13:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:54.064 13:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:54.064 13:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:54.064 13:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:54.064 13:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:54.064 13:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:54.064 13:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:07:54.064 13:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:07:54.064 13:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63155 00:07:54.064 13:17:43 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:54.064 13:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63155 00:07:54.064 13:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63155 ']' 00:07:54.064 13:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.064 13:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:54.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.064 13:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.064 13:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:54.064 13:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.064 [2024-11-17 13:17:43.118811] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:07:54.064 [2024-11-17 13:17:43.119025] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63155 ] 00:07:54.324 [2024-11-17 13:17:43.292054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.324 [2024-11-17 13:17:43.407301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.584 [2024-11-17 13:17:43.608716] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:54.584 [2024-11-17 13:17:43.608781] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:54.844 13:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:54.844 13:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:54.844 13:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:54.844 13:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:54.844 13:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:54.844 13:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:54.844 13:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:54.844 13:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:54.844 13:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:54.844 13:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:54.844 13:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:54.844 
13:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.844 13:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.844 malloc1 00:07:54.844 13:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.844 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:54.844 13:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.844 13:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.844 [2024-11-17 13:17:44.014520] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:54.844 [2024-11-17 13:17:44.014653] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:54.844 [2024-11-17 13:17:44.014695] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:54.844 [2024-11-17 13:17:44.014728] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:54.844 [2024-11-17 13:17:44.016979] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:54.844 [2024-11-17 13:17:44.017068] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:54.844 pt1 00:07:54.844 13:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.844 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:54.844 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:54.844 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:54.844 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:54.844 13:17:44 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:54.844 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:54.844 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:54.844 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:54.844 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:54.844 13:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.844 13:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.844 malloc2 00:07:54.844 13:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.844 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:54.844 13:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.844 13:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.105 [2024-11-17 13:17:44.070029] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:55.105 [2024-11-17 13:17:44.070148] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:55.105 [2024-11-17 13:17:44.070212] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:55.105 [2024-11-17 13:17:44.070272] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:55.105 [2024-11-17 13:17:44.072669] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:55.105 [2024-11-17 13:17:44.072747] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:55.105 
pt2 00:07:55.105 13:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.105 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:55.105 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:55.105 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:55.105 13:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.105 13:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.105 [2024-11-17 13:17:44.082062] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:55.105 [2024-11-17 13:17:44.083882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:55.105 [2024-11-17 13:17:44.084110] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:55.105 [2024-11-17 13:17:44.084159] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:55.105 [2024-11-17 13:17:44.084535] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:55.105 [2024-11-17 13:17:44.084778] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:55.105 [2024-11-17 13:17:44.084829] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:55.105 [2024-11-17 13:17:44.085056] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:55.105 13:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.105 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:55.105 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:07:55.105 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:55.105 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:55.105 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:55.105 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:55.105 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:55.105 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:55.105 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:55.105 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:55.105 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.105 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:55.105 13:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.105 13:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.105 13:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.105 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:55.105 "name": "raid_bdev1", 00:07:55.105 "uuid": "9ac01c41-091c-463f-b858-c3bca17330b3", 00:07:55.105 "strip_size_kb": 0, 00:07:55.105 "state": "online", 00:07:55.105 "raid_level": "raid1", 00:07:55.105 "superblock": true, 00:07:55.105 "num_base_bdevs": 2, 00:07:55.106 "num_base_bdevs_discovered": 2, 00:07:55.106 "num_base_bdevs_operational": 2, 00:07:55.106 "base_bdevs_list": [ 00:07:55.106 { 00:07:55.106 "name": "pt1", 00:07:55.106 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:07:55.106 "is_configured": true, 00:07:55.106 "data_offset": 2048, 00:07:55.106 "data_size": 63488 00:07:55.106 }, 00:07:55.106 { 00:07:55.106 "name": "pt2", 00:07:55.106 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:55.106 "is_configured": true, 00:07:55.106 "data_offset": 2048, 00:07:55.106 "data_size": 63488 00:07:55.106 } 00:07:55.106 ] 00:07:55.106 }' 00:07:55.106 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:55.106 13:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.372 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:55.372 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:55.372 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:55.372 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:55.372 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:55.372 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:55.372 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:55.372 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:55.372 13:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.372 13:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.372 [2024-11-17 13:17:44.537573] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:55.372 13:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.372 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:07:55.372 "name": "raid_bdev1", 00:07:55.372 "aliases": [ 00:07:55.372 "9ac01c41-091c-463f-b858-c3bca17330b3" 00:07:55.372 ], 00:07:55.372 "product_name": "Raid Volume", 00:07:55.372 "block_size": 512, 00:07:55.372 "num_blocks": 63488, 00:07:55.373 "uuid": "9ac01c41-091c-463f-b858-c3bca17330b3", 00:07:55.373 "assigned_rate_limits": { 00:07:55.373 "rw_ios_per_sec": 0, 00:07:55.373 "rw_mbytes_per_sec": 0, 00:07:55.373 "r_mbytes_per_sec": 0, 00:07:55.373 "w_mbytes_per_sec": 0 00:07:55.373 }, 00:07:55.373 "claimed": false, 00:07:55.373 "zoned": false, 00:07:55.373 "supported_io_types": { 00:07:55.373 "read": true, 00:07:55.373 "write": true, 00:07:55.373 "unmap": false, 00:07:55.373 "flush": false, 00:07:55.373 "reset": true, 00:07:55.373 "nvme_admin": false, 00:07:55.373 "nvme_io": false, 00:07:55.373 "nvme_io_md": false, 00:07:55.373 "write_zeroes": true, 00:07:55.373 "zcopy": false, 00:07:55.373 "get_zone_info": false, 00:07:55.373 "zone_management": false, 00:07:55.373 "zone_append": false, 00:07:55.373 "compare": false, 00:07:55.373 "compare_and_write": false, 00:07:55.373 "abort": false, 00:07:55.373 "seek_hole": false, 00:07:55.373 "seek_data": false, 00:07:55.373 "copy": false, 00:07:55.373 "nvme_iov_md": false 00:07:55.373 }, 00:07:55.373 "memory_domains": [ 00:07:55.373 { 00:07:55.373 "dma_device_id": "system", 00:07:55.373 "dma_device_type": 1 00:07:55.373 }, 00:07:55.373 { 00:07:55.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:55.373 "dma_device_type": 2 00:07:55.373 }, 00:07:55.373 { 00:07:55.373 "dma_device_id": "system", 00:07:55.373 "dma_device_type": 1 00:07:55.373 }, 00:07:55.373 { 00:07:55.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:55.373 "dma_device_type": 2 00:07:55.373 } 00:07:55.373 ], 00:07:55.373 "driver_specific": { 00:07:55.373 "raid": { 00:07:55.373 "uuid": "9ac01c41-091c-463f-b858-c3bca17330b3", 00:07:55.373 "strip_size_kb": 0, 00:07:55.373 "state": "online", 00:07:55.373 "raid_level": "raid1", 
00:07:55.373 "superblock": true, 00:07:55.373 "num_base_bdevs": 2, 00:07:55.373 "num_base_bdevs_discovered": 2, 00:07:55.373 "num_base_bdevs_operational": 2, 00:07:55.373 "base_bdevs_list": [ 00:07:55.373 { 00:07:55.373 "name": "pt1", 00:07:55.373 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:55.373 "is_configured": true, 00:07:55.373 "data_offset": 2048, 00:07:55.373 "data_size": 63488 00:07:55.373 }, 00:07:55.373 { 00:07:55.373 "name": "pt2", 00:07:55.373 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:55.373 "is_configured": true, 00:07:55.373 "data_offset": 2048, 00:07:55.373 "data_size": 63488 00:07:55.373 } 00:07:55.373 ] 00:07:55.373 } 00:07:55.373 } 00:07:55.373 }' 00:07:55.373 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:55.640 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:55.640 pt2' 00:07:55.640 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:55.640 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:55.640 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:55.640 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:55.640 13:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.640 13:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.640 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:55.641 13:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.641 13:17:44 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:55.641 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:55.641 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:55.641 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:55.641 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:55.641 13:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.641 13:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.641 13:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.641 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:55.641 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:55.641 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:55.641 13:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.641 13:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.641 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:55.641 [2024-11-17 13:17:44.761174] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:55.641 13:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.641 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=9ac01c41-091c-463f-b858-c3bca17330b3 00:07:55.641 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 9ac01c41-091c-463f-b858-c3bca17330b3 ']' 00:07:55.641 13:17:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:55.641 13:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.641 13:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.641 [2024-11-17 13:17:44.796806] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:55.641 [2024-11-17 13:17:44.796833] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:55.641 [2024-11-17 13:17:44.796922] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:55.641 [2024-11-17 13:17:44.796981] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:55.641 [2024-11-17 13:17:44.796993] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:55.641 13:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.641 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.641 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:55.641 13:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.641 13:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.641 13:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.641 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:55.641 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:55.641 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:55.641 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt1 00:07:55.641 13:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.641 13:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.641 13:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.641 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:55.641 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:55.641 13:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.641 13:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.902 13:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.902 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:55.902 13:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.902 13:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.902 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:55.902 13:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.902 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:55.902 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:55.902 13:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:55.902 13:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:55.902 13:17:44 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:55.902 13:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:55.902 13:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:55.902 13:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:55.902 13:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:55.902 13:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.902 13:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.902 [2024-11-17 13:17:44.932634] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:55.902 [2024-11-17 13:17:44.934677] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:55.902 [2024-11-17 13:17:44.934783] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:55.902 [2024-11-17 13:17:44.934891] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:55.902 [2024-11-17 13:17:44.934949] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:55.902 [2024-11-17 13:17:44.934986] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:55.902 request: 00:07:55.902 { 00:07:55.902 "name": "raid_bdev1", 00:07:55.902 "raid_level": "raid1", 00:07:55.902 "base_bdevs": [ 00:07:55.902 "malloc1", 00:07:55.902 "malloc2" 00:07:55.902 ], 00:07:55.902 "superblock": false, 00:07:55.902 "method": "bdev_raid_create", 00:07:55.902 "req_id": 1 00:07:55.902 } 00:07:55.902 Got 
JSON-RPC error response 00:07:55.902 response: 00:07:55.902 { 00:07:55.902 "code": -17, 00:07:55.902 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:55.902 } 00:07:55.902 13:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:55.902 13:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:55.902 13:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:55.902 13:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:55.902 13:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:55.902 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.902 13:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.902 13:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.902 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:55.902 13:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.902 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:55.902 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:55.902 13:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:55.902 13:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.902 13:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.902 [2024-11-17 13:17:45.000498] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:55.902 [2024-11-17 13:17:45.000575] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:07:55.902 [2024-11-17 13:17:45.000595] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:55.902 [2024-11-17 13:17:45.000606] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:55.902 [2024-11-17 13:17:45.002780] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:55.902 [2024-11-17 13:17:45.002824] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:55.902 [2024-11-17 13:17:45.002913] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:55.902 [2024-11-17 13:17:45.002982] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:55.902 pt1 00:07:55.902 13:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.902 13:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:07:55.902 13:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:55.902 13:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:55.902 13:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:55.902 13:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:55.902 13:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:55.902 13:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:55.902 13:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:55.902 13:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:55.902 13:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:55.902 
13:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.902 13:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:55.902 13:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.902 13:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.902 13:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.902 13:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:55.902 "name": "raid_bdev1", 00:07:55.902 "uuid": "9ac01c41-091c-463f-b858-c3bca17330b3", 00:07:55.902 "strip_size_kb": 0, 00:07:55.902 "state": "configuring", 00:07:55.902 "raid_level": "raid1", 00:07:55.902 "superblock": true, 00:07:55.902 "num_base_bdevs": 2, 00:07:55.902 "num_base_bdevs_discovered": 1, 00:07:55.902 "num_base_bdevs_operational": 2, 00:07:55.902 "base_bdevs_list": [ 00:07:55.902 { 00:07:55.902 "name": "pt1", 00:07:55.902 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:55.902 "is_configured": true, 00:07:55.902 "data_offset": 2048, 00:07:55.902 "data_size": 63488 00:07:55.902 }, 00:07:55.902 { 00:07:55.902 "name": null, 00:07:55.902 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:55.902 "is_configured": false, 00:07:55.902 "data_offset": 2048, 00:07:55.902 "data_size": 63488 00:07:55.902 } 00:07:55.902 ] 00:07:55.902 }' 00:07:55.902 13:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:55.902 13:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.473 13:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:56.473 13:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:56.473 13:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs 
)) 00:07:56.473 13:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:56.473 13:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.473 13:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.473 [2024-11-17 13:17:45.439797] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:56.473 [2024-11-17 13:17:45.439962] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:56.473 [2024-11-17 13:17:45.440003] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:56.473 [2024-11-17 13:17:45.440035] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:56.473 [2024-11-17 13:17:45.440603] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:56.473 [2024-11-17 13:17:45.440694] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:56.473 [2024-11-17 13:17:45.440834] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:56.473 [2024-11-17 13:17:45.440894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:56.473 [2024-11-17 13:17:45.441069] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:56.473 [2024-11-17 13:17:45.441114] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:56.473 [2024-11-17 13:17:45.441423] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:56.473 [2024-11-17 13:17:45.441655] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:56.473 [2024-11-17 13:17:45.441705] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007e80 00:07:56.473 [2024-11-17 13:17:45.441910] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:56.473 pt2 00:07:56.473 13:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.473 13:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:56.473 13:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:56.473 13:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:56.473 13:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:56.473 13:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:56.473 13:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:56.473 13:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:56.473 13:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:56.473 13:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:56.473 13:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:56.473 13:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:56.473 13:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:56.473 13:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:56.473 13:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.473 13:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.473 13:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:07:56.473 13:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.473 13:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:56.473 "name": "raid_bdev1", 00:07:56.473 "uuid": "9ac01c41-091c-463f-b858-c3bca17330b3", 00:07:56.473 "strip_size_kb": 0, 00:07:56.473 "state": "online", 00:07:56.473 "raid_level": "raid1", 00:07:56.473 "superblock": true, 00:07:56.473 "num_base_bdevs": 2, 00:07:56.473 "num_base_bdevs_discovered": 2, 00:07:56.473 "num_base_bdevs_operational": 2, 00:07:56.473 "base_bdevs_list": [ 00:07:56.473 { 00:07:56.473 "name": "pt1", 00:07:56.473 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:56.473 "is_configured": true, 00:07:56.473 "data_offset": 2048, 00:07:56.473 "data_size": 63488 00:07:56.473 }, 00:07:56.473 { 00:07:56.473 "name": "pt2", 00:07:56.473 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:56.473 "is_configured": true, 00:07:56.473 "data_offset": 2048, 00:07:56.473 "data_size": 63488 00:07:56.473 } 00:07:56.473 ] 00:07:56.473 }' 00:07:56.473 13:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:56.473 13:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.734 13:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:56.734 13:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:56.734 13:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:56.734 13:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:56.734 13:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:56.734 13:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:56.734 13:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 
-- # jq '.[]' 00:07:56.734 13:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:56.734 13:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.734 13:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.734 [2024-11-17 13:17:45.903215] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:56.734 13:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.734 13:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:56.734 "name": "raid_bdev1", 00:07:56.734 "aliases": [ 00:07:56.734 "9ac01c41-091c-463f-b858-c3bca17330b3" 00:07:56.734 ], 00:07:56.734 "product_name": "Raid Volume", 00:07:56.734 "block_size": 512, 00:07:56.734 "num_blocks": 63488, 00:07:56.734 "uuid": "9ac01c41-091c-463f-b858-c3bca17330b3", 00:07:56.734 "assigned_rate_limits": { 00:07:56.734 "rw_ios_per_sec": 0, 00:07:56.734 "rw_mbytes_per_sec": 0, 00:07:56.734 "r_mbytes_per_sec": 0, 00:07:56.734 "w_mbytes_per_sec": 0 00:07:56.734 }, 00:07:56.734 "claimed": false, 00:07:56.734 "zoned": false, 00:07:56.734 "supported_io_types": { 00:07:56.734 "read": true, 00:07:56.734 "write": true, 00:07:56.734 "unmap": false, 00:07:56.734 "flush": false, 00:07:56.734 "reset": true, 00:07:56.734 "nvme_admin": false, 00:07:56.734 "nvme_io": false, 00:07:56.734 "nvme_io_md": false, 00:07:56.734 "write_zeroes": true, 00:07:56.734 "zcopy": false, 00:07:56.734 "get_zone_info": false, 00:07:56.734 "zone_management": false, 00:07:56.734 "zone_append": false, 00:07:56.734 "compare": false, 00:07:56.734 "compare_and_write": false, 00:07:56.734 "abort": false, 00:07:56.734 "seek_hole": false, 00:07:56.734 "seek_data": false, 00:07:56.734 "copy": false, 00:07:56.734 "nvme_iov_md": false 00:07:56.734 }, 00:07:56.734 "memory_domains": [ 00:07:56.734 { 00:07:56.734 "dma_device_id": 
"system", 00:07:56.734 "dma_device_type": 1 00:07:56.734 }, 00:07:56.734 { 00:07:56.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.734 "dma_device_type": 2 00:07:56.734 }, 00:07:56.734 { 00:07:56.734 "dma_device_id": "system", 00:07:56.734 "dma_device_type": 1 00:07:56.734 }, 00:07:56.734 { 00:07:56.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.734 "dma_device_type": 2 00:07:56.734 } 00:07:56.734 ], 00:07:56.734 "driver_specific": { 00:07:56.734 "raid": { 00:07:56.734 "uuid": "9ac01c41-091c-463f-b858-c3bca17330b3", 00:07:56.734 "strip_size_kb": 0, 00:07:56.734 "state": "online", 00:07:56.734 "raid_level": "raid1", 00:07:56.734 "superblock": true, 00:07:56.734 "num_base_bdevs": 2, 00:07:56.734 "num_base_bdevs_discovered": 2, 00:07:56.734 "num_base_bdevs_operational": 2, 00:07:56.734 "base_bdevs_list": [ 00:07:56.734 { 00:07:56.734 "name": "pt1", 00:07:56.734 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:56.734 "is_configured": true, 00:07:56.734 "data_offset": 2048, 00:07:56.734 "data_size": 63488 00:07:56.734 }, 00:07:56.734 { 00:07:56.734 "name": "pt2", 00:07:56.734 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:56.734 "is_configured": true, 00:07:56.734 "data_offset": 2048, 00:07:56.734 "data_size": 63488 00:07:56.734 } 00:07:56.734 ] 00:07:56.734 } 00:07:56.734 } 00:07:56.734 }' 00:07:56.734 13:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:56.995 13:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:56.995 pt2' 00:07:56.995 13:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:56.995 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:56.995 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:07:56.995 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:56.995 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:56.995 13:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.995 13:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.995 13:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.995 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:56.995 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:56.995 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:56.995 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:56.995 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:56.995 13:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.995 13:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.995 13:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.995 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:56.995 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:56.995 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:56.995 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:56.995 13:17:46 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.995 13:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.995 [2024-11-17 13:17:46.110836] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:56.995 13:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.995 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 9ac01c41-091c-463f-b858-c3bca17330b3 '!=' 9ac01c41-091c-463f-b858-c3bca17330b3 ']' 00:07:56.995 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:07:56.995 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:56.995 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:56.995 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:07:56.995 13:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.995 13:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.995 [2024-11-17 13:17:46.154555] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:07:56.995 13:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.995 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:56.995 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:56.995 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:56.995 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:56.995 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:56.995 13:17:46 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:56.995 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:56.995 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:56.995 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:56.995 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:56.995 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.995 13:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.995 13:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.995 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:56.995 13:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.995 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:56.995 "name": "raid_bdev1", 00:07:56.995 "uuid": "9ac01c41-091c-463f-b858-c3bca17330b3", 00:07:56.995 "strip_size_kb": 0, 00:07:56.995 "state": "online", 00:07:56.995 "raid_level": "raid1", 00:07:56.995 "superblock": true, 00:07:56.995 "num_base_bdevs": 2, 00:07:56.995 "num_base_bdevs_discovered": 1, 00:07:56.995 "num_base_bdevs_operational": 1, 00:07:56.995 "base_bdevs_list": [ 00:07:56.995 { 00:07:56.995 "name": null, 00:07:56.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:56.995 "is_configured": false, 00:07:56.995 "data_offset": 0, 00:07:56.995 "data_size": 63488 00:07:56.995 }, 00:07:56.995 { 00:07:56.995 "name": "pt2", 00:07:56.995 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:56.995 "is_configured": true, 00:07:56.995 "data_offset": 2048, 00:07:56.995 "data_size": 63488 00:07:56.995 } 00:07:56.995 ] 00:07:56.995 }' 
00:07:56.995 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:56.995 13:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.565 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:57.565 13:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.565 13:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.565 [2024-11-17 13:17:46.629732] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:57.565 [2024-11-17 13:17:46.629814] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:57.565 [2024-11-17 13:17:46.629913] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:57.565 [2024-11-17 13:17:46.630032] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:57.565 [2024-11-17 13:17:46.630085] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:57.565 13:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.565 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.565 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:07:57.565 13:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.565 13:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.565 13:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.565 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:07:57.565 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' 
']' 00:07:57.565 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:07:57.565 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:07:57.565 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:07:57.565 13:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.565 13:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.565 13:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.565 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:07:57.565 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:07:57.565 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:07:57.565 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:07:57.565 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:07:57.565 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:57.565 13:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.565 13:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.565 [2024-11-17 13:17:46.705571] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:57.565 [2024-11-17 13:17:46.705639] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:57.565 [2024-11-17 13:17:46.705658] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:07:57.565 [2024-11-17 13:17:46.705669] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:57.565 
[2024-11-17 13:17:46.707886] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:57.565 [2024-11-17 13:17:46.707928] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:57.565 [2024-11-17 13:17:46.708024] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:57.565 [2024-11-17 13:17:46.708070] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:57.565 [2024-11-17 13:17:46.708175] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:07:57.565 [2024-11-17 13:17:46.708187] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:57.565 [2024-11-17 13:17:46.708440] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:57.565 [2024-11-17 13:17:46.708648] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:07:57.565 [2024-11-17 13:17:46.708660] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:07:57.565 [2024-11-17 13:17:46.708793] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:57.565 pt2 00:07:57.565 13:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.565 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:57.565 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:57.566 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:57.566 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:57.566 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:57.566 13:17:46 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:57.566 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:57.566 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.566 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:57.566 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.566 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.566 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:57.566 13:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.566 13:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.566 13:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.566 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.566 "name": "raid_bdev1", 00:07:57.566 "uuid": "9ac01c41-091c-463f-b858-c3bca17330b3", 00:07:57.566 "strip_size_kb": 0, 00:07:57.566 "state": "online", 00:07:57.566 "raid_level": "raid1", 00:07:57.566 "superblock": true, 00:07:57.566 "num_base_bdevs": 2, 00:07:57.566 "num_base_bdevs_discovered": 1, 00:07:57.566 "num_base_bdevs_operational": 1, 00:07:57.566 "base_bdevs_list": [ 00:07:57.566 { 00:07:57.566 "name": null, 00:07:57.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.566 "is_configured": false, 00:07:57.566 "data_offset": 2048, 00:07:57.566 "data_size": 63488 00:07:57.566 }, 00:07:57.566 { 00:07:57.566 "name": "pt2", 00:07:57.566 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:57.566 "is_configured": true, 00:07:57.566 "data_offset": 2048, 00:07:57.566 "data_size": 63488 00:07:57.566 } 00:07:57.566 ] 00:07:57.566 }' 
00:07:57.566 13:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.566 13:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.135 13:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:58.135 13:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.135 13:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.135 [2024-11-17 13:17:47.108888] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:58.135 [2024-11-17 13:17:47.108967] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:58.135 [2024-11-17 13:17:47.109085] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:58.135 [2024-11-17 13:17:47.109170] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:58.135 [2024-11-17 13:17:47.109238] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:07:58.135 13:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.135 13:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.135 13:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:07:58.135 13:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.135 13:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.135 13:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.135 13:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:07:58.135 13:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' 
']' 00:07:58.135 13:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:07:58.135 13:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:58.135 13:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.135 13:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.135 [2024-11-17 13:17:47.176785] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:58.135 [2024-11-17 13:17:47.176894] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:58.135 [2024-11-17 13:17:47.176948] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:07:58.135 [2024-11-17 13:17:47.176983] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:58.135 [2024-11-17 13:17:47.179214] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:58.136 [2024-11-17 13:17:47.179300] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:58.136 [2024-11-17 13:17:47.179442] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:58.136 [2024-11-17 13:17:47.179523] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:58.136 [2024-11-17 13:17:47.179703] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:07:58.136 [2024-11-17 13:17:47.179758] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:58.136 [2024-11-17 13:17:47.179803] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:07:58.136 [2024-11-17 13:17:47.179941] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
pt2 is claimed 00:07:58.136 [2024-11-17 13:17:47.180070] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:07:58.136 [2024-11-17 13:17:47.180110] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:58.136 [2024-11-17 13:17:47.180407] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:07:58.136 [2024-11-17 13:17:47.180636] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:07:58.136 [2024-11-17 13:17:47.180689] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:07:58.136 [2024-11-17 13:17:47.180933] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:58.136 pt1 00:07:58.136 13:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.136 13:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:07:58.136 13:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:58.136 13:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:58.136 13:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:58.136 13:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:58.136 13:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:58.136 13:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:58.136 13:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:58.136 13:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:58.136 13:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:58.136 13:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.136 13:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.136 13:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:58.136 13:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.136 13:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.136 13:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.136 13:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:58.136 "name": "raid_bdev1", 00:07:58.136 "uuid": "9ac01c41-091c-463f-b858-c3bca17330b3", 00:07:58.136 "strip_size_kb": 0, 00:07:58.136 "state": "online", 00:07:58.136 "raid_level": "raid1", 00:07:58.136 "superblock": true, 00:07:58.136 "num_base_bdevs": 2, 00:07:58.136 "num_base_bdevs_discovered": 1, 00:07:58.136 "num_base_bdevs_operational": 1, 00:07:58.136 "base_bdevs_list": [ 00:07:58.136 { 00:07:58.136 "name": null, 00:07:58.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:58.136 "is_configured": false, 00:07:58.136 "data_offset": 2048, 00:07:58.136 "data_size": 63488 00:07:58.136 }, 00:07:58.136 { 00:07:58.136 "name": "pt2", 00:07:58.136 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:58.136 "is_configured": true, 00:07:58.136 "data_offset": 2048, 00:07:58.136 "data_size": 63488 00:07:58.136 } 00:07:58.136 ] 00:07:58.136 }' 00:07:58.136 13:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:58.136 13:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.396 13:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:07:58.396 13:17:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:58.396 13:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.396 13:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.655 13:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.655 13:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:07:58.655 13:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:58.655 13:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.655 13:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.655 [2024-11-17 13:17:47.640335] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:58.655 13:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:07:58.655 13:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.655 13:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 9ac01c41-091c-463f-b858-c3bca17330b3 '!=' 9ac01c41-091c-463f-b858-c3bca17330b3 ']' 00:07:58.655 13:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63155 00:07:58.655 13:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63155 ']' 00:07:58.655 13:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 63155 00:07:58.655 13:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:58.655 13:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:58.655 13:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63155 00:07:58.655 
13:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:58.655 13:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:58.655 13:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63155' 00:07:58.655 killing process with pid 63155 00:07:58.655 13:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63155 00:07:58.655 [2024-11-17 13:17:47.720532] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:58.655 [2024-11-17 13:17:47.720701] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:58.656 13:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 63155 00:07:58.656 [2024-11-17 13:17:47.720811] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:58.656 [2024-11-17 13:17:47.720866] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:07:58.915 [2024-11-17 13:17:47.923501] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:59.855 13:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:59.855 ************************************ 00:07:59.855 END TEST raid_superblock_test 00:07:59.855 ************************************ 00:07:59.855 00:07:59.855 real 0m5.976s 00:07:59.855 user 0m9.042s 00:07:59.855 sys 0m1.045s 00:07:59.855 13:17:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:59.855 13:17:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.855 13:17:49 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:07:59.855 13:17:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:59.855 13:17:49 bdev_raid 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:59.855 13:17:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:59.855 ************************************ 00:07:59.855 START TEST raid_read_error_test 00:07:59.855 ************************************ 00:07:59.855 13:17:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:07:59.855 13:17:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:07:59.855 13:17:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:59.855 13:17:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:00.114 13:17:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:00.114 13:17:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:00.114 13:17:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:00.114 13:17:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:00.114 13:17:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:00.114 13:17:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:00.114 13:17:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:00.114 13:17:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:00.114 13:17:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:00.114 13:17:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:00.114 13:17:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:00.114 13:17:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:00.114 13:17:49 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:00.114 13:17:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:00.114 13:17:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:00.114 13:17:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:00.114 13:17:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:00.114 13:17:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:00.114 13:17:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.kcIgWEllKi 00:08:00.114 13:17:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63480 00:08:00.114 13:17:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:00.114 13:17:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63480 00:08:00.114 13:17:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63480 ']' 00:08:00.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.114 13:17:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.114 13:17:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:00.114 13:17:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:00.114 13:17:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:00.114 13:17:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.114 [2024-11-17 13:17:49.178126] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:08:00.114 [2024-11-17 13:17:49.178312] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63480 ] 00:08:00.114 [2024-11-17 13:17:49.331436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.516 [2024-11-17 13:17:49.444885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.516 [2024-11-17 13:17:49.647582] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:00.516 [2024-11-17 13:17:49.647625] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:01.085 13:17:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:01.085 13:17:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:01.085 13:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:01.085 13:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:01.085 13:17:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.085 13:17:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.085 BaseBdev1_malloc 00:08:01.085 13:17:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.085 13:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:08:01.085 13:17:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.085 13:17:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.085 true 00:08:01.085 13:17:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.085 13:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:01.085 13:17:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.085 13:17:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.085 [2024-11-17 13:17:50.072673] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:01.085 [2024-11-17 13:17:50.072734] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:01.085 [2024-11-17 13:17:50.072755] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:01.085 [2024-11-17 13:17:50.072766] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:01.085 [2024-11-17 13:17:50.074950] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:01.085 [2024-11-17 13:17:50.075004] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:01.085 BaseBdev1 00:08:01.085 13:17:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.085 13:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:01.085 13:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:01.085 13:17:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.085 13:17:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:08:01.085 BaseBdev2_malloc 00:08:01.085 13:17:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.085 13:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:01.085 13:17:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.085 13:17:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.085 true 00:08:01.085 13:17:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.085 13:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:01.085 13:17:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.085 13:17:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.085 [2024-11-17 13:17:50.139398] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:01.085 [2024-11-17 13:17:50.139450] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:01.085 [2024-11-17 13:17:50.139482] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:01.085 [2024-11-17 13:17:50.139492] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:01.085 [2024-11-17 13:17:50.141608] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:01.085 [2024-11-17 13:17:50.141659] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:01.085 BaseBdev2 00:08:01.085 13:17:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.085 13:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:01.085 13:17:50 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.085 13:17:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.085 [2024-11-17 13:17:50.151433] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:01.085 [2024-11-17 13:17:50.153266] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:01.085 [2024-11-17 13:17:50.153509] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:01.085 [2024-11-17 13:17:50.153563] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:01.085 [2024-11-17 13:17:50.153823] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:01.086 [2024-11-17 13:17:50.154040] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:01.086 [2024-11-17 13:17:50.154085] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:01.086 [2024-11-17 13:17:50.154320] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:01.086 13:17:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.086 13:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:01.086 13:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:01.086 13:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:01.086 13:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:01.086 13:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:01.086 13:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:08:01.086 13:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:01.086 13:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.086 13:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:01.086 13:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.086 13:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.086 13:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:01.086 13:17:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.086 13:17:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.086 13:17:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.086 13:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.086 "name": "raid_bdev1", 00:08:01.086 "uuid": "d6af3b16-54e7-415b-b285-a619214e01be", 00:08:01.086 "strip_size_kb": 0, 00:08:01.086 "state": "online", 00:08:01.086 "raid_level": "raid1", 00:08:01.086 "superblock": true, 00:08:01.086 "num_base_bdevs": 2, 00:08:01.086 "num_base_bdevs_discovered": 2, 00:08:01.086 "num_base_bdevs_operational": 2, 00:08:01.086 "base_bdevs_list": [ 00:08:01.086 { 00:08:01.086 "name": "BaseBdev1", 00:08:01.086 "uuid": "ec064fef-4cbe-512f-8673-762c7278e20d", 00:08:01.086 "is_configured": true, 00:08:01.086 "data_offset": 2048, 00:08:01.086 "data_size": 63488 00:08:01.086 }, 00:08:01.086 { 00:08:01.086 "name": "BaseBdev2", 00:08:01.086 "uuid": "7c882355-ff2a-575a-bc8c-7a03da07e38a", 00:08:01.086 "is_configured": true, 00:08:01.086 "data_offset": 2048, 00:08:01.086 "data_size": 63488 00:08:01.086 } 00:08:01.086 ] 00:08:01.086 }' 00:08:01.086 13:17:50 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.086 13:17:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.654 13:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:01.654 13:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:01.654 [2024-11-17 13:17:50.739828] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:02.593 13:17:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:02.593 13:17:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.593 13:17:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.593 13:17:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.593 13:17:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:02.593 13:17:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:02.593 13:17:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:08:02.593 13:17:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:02.593 13:17:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:02.593 13:17:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:02.593 13:17:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:02.593 13:17:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:02.593 13:17:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:02.593 13:17:51 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:02.593 13:17:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.593 13:17:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.593 13:17:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.593 13:17:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.593 13:17:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.593 13:17:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:02.593 13:17:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.593 13:17:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.593 13:17:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.593 13:17:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.593 "name": "raid_bdev1", 00:08:02.593 "uuid": "d6af3b16-54e7-415b-b285-a619214e01be", 00:08:02.593 "strip_size_kb": 0, 00:08:02.593 "state": "online", 00:08:02.593 "raid_level": "raid1", 00:08:02.593 "superblock": true, 00:08:02.593 "num_base_bdevs": 2, 00:08:02.593 "num_base_bdevs_discovered": 2, 00:08:02.594 "num_base_bdevs_operational": 2, 00:08:02.594 "base_bdevs_list": [ 00:08:02.594 { 00:08:02.594 "name": "BaseBdev1", 00:08:02.594 "uuid": "ec064fef-4cbe-512f-8673-762c7278e20d", 00:08:02.594 "is_configured": true, 00:08:02.594 "data_offset": 2048, 00:08:02.594 "data_size": 63488 00:08:02.594 }, 00:08:02.594 { 00:08:02.594 "name": "BaseBdev2", 00:08:02.594 "uuid": "7c882355-ff2a-575a-bc8c-7a03da07e38a", 00:08:02.594 "is_configured": true, 00:08:02.594 "data_offset": 2048, 00:08:02.594 "data_size": 63488 
00:08:02.594 } 00:08:02.594 ] 00:08:02.594 }' 00:08:02.594 13:17:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.594 13:17:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.164 13:17:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:03.164 13:17:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.164 13:17:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.164 [2024-11-17 13:17:52.111514] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:03.164 [2024-11-17 13:17:52.111622] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:03.164 [2024-11-17 13:17:52.114486] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:03.164 [2024-11-17 13:17:52.114574] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:03.164 [2024-11-17 13:17:52.114672] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:03.164 [2024-11-17 13:17:52.114763] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:03.164 { 00:08:03.164 "results": [ 00:08:03.164 { 00:08:03.164 "job": "raid_bdev1", 00:08:03.164 "core_mask": "0x1", 00:08:03.164 "workload": "randrw", 00:08:03.164 "percentage": 50, 00:08:03.164 "status": "finished", 00:08:03.164 "queue_depth": 1, 00:08:03.164 "io_size": 131072, 00:08:03.164 "runtime": 1.372691, 00:08:03.164 "iops": 17900.60545308449, 00:08:03.164 "mibps": 2237.575681635561, 00:08:03.164 "io_failed": 0, 00:08:03.164 "io_timeout": 0, 00:08:03.164 "avg_latency_us": 53.32561434287758, 00:08:03.164 "min_latency_us": 22.91703056768559, 00:08:03.164 "max_latency_us": 1395.1441048034935 00:08:03.164 } 00:08:03.164 ], 
00:08:03.164 "core_count": 1 00:08:03.164 } 00:08:03.164 13:17:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.164 13:17:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63480 00:08:03.164 13:17:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63480 ']' 00:08:03.164 13:17:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63480 00:08:03.164 13:17:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:03.164 13:17:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:03.164 13:17:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63480 00:08:03.164 13:17:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:03.164 13:17:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:03.164 13:17:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63480' 00:08:03.164 killing process with pid 63480 00:08:03.164 13:17:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63480 00:08:03.164 [2024-11-17 13:17:52.146944] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:03.164 13:17:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63480 00:08:03.164 [2024-11-17 13:17:52.282597] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:04.545 13:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:04.545 13:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.kcIgWEllKi 00:08:04.545 13:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:04.545 13:17:53 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:04.545 13:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:04.545 ************************************ 00:08:04.545 END TEST raid_read_error_test 00:08:04.545 ************************************ 00:08:04.545 13:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:04.545 13:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:04.545 13:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:04.545 00:08:04.545 real 0m4.358s 00:08:04.545 user 0m5.255s 00:08:04.545 sys 0m0.548s 00:08:04.545 13:17:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:04.545 13:17:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.545 13:17:53 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:08:04.545 13:17:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:04.545 13:17:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:04.545 13:17:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:04.545 ************************************ 00:08:04.545 START TEST raid_write_error_test 00:08:04.545 ************************************ 00:08:04.545 13:17:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:08:04.545 13:17:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:04.545 13:17:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:04.545 13:17:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:04.545 13:17:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:04.545 13:17:53 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:04.545 13:17:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:04.545 13:17:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:04.545 13:17:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:04.546 13:17:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:04.546 13:17:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:04.546 13:17:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:04.546 13:17:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:04.546 13:17:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:04.546 13:17:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:04.546 13:17:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:04.546 13:17:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:04.546 13:17:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:04.546 13:17:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:04.546 13:17:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:04.546 13:17:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:04.546 13:17:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:04.546 13:17:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.2BgAEegN79 00:08:04.546 13:17:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63624 00:08:04.546 13:17:53 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@811 -- # waitforlisten 63624 00:08:04.546 13:17:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:04.546 13:17:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63624 ']' 00:08:04.546 13:17:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.546 13:17:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:04.546 13:17:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:04.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:04.546 13:17:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:04.546 13:17:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.546 [2024-11-17 13:17:53.607975] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:08:04.546 [2024-11-17 13:17:53.608169] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63624 ] 00:08:04.805 [2024-11-17 13:17:53.772908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.805 [2024-11-17 13:17:53.886042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.065 [2024-11-17 13:17:54.084995] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:05.065 [2024-11-17 13:17:54.085154] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:05.325 13:17:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:05.325 13:17:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:05.325 13:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:05.325 13:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:05.325 13:17:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.325 13:17:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.325 BaseBdev1_malloc 00:08:05.325 13:17:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.325 13:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:05.325 13:17:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.325 13:17:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.325 true 00:08:05.325 13:17:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:05.325 13:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:05.325 13:17:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.325 13:17:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.325 [2024-11-17 13:17:54.502186] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:05.325 [2024-11-17 13:17:54.502318] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:05.325 [2024-11-17 13:17:54.502347] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:05.325 [2024-11-17 13:17:54.502360] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:05.326 [2024-11-17 13:17:54.504588] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:05.326 [2024-11-17 13:17:54.504629] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:05.326 BaseBdev1 00:08:05.326 13:17:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.326 13:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:05.326 13:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:05.326 13:17:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.326 13:17:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.585 BaseBdev2_malloc 00:08:05.585 13:17:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.585 13:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:05.585 13:17:54 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.585 13:17:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.585 true 00:08:05.585 13:17:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.585 13:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:05.585 13:17:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.585 13:17:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.585 [2024-11-17 13:17:54.567627] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:05.585 [2024-11-17 13:17:54.567685] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:05.585 [2024-11-17 13:17:54.567704] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:05.585 [2024-11-17 13:17:54.567714] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:05.585 [2024-11-17 13:17:54.569906] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:05.585 [2024-11-17 13:17:54.569987] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:05.585 BaseBdev2 00:08:05.585 13:17:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.586 13:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:05.586 13:17:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.586 13:17:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.586 [2024-11-17 13:17:54.579666] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:05.586 [2024-11-17 13:17:54.581570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:05.586 [2024-11-17 13:17:54.581782] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:05.586 [2024-11-17 13:17:54.581798] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:05.586 [2024-11-17 13:17:54.582029] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:05.586 [2024-11-17 13:17:54.582228] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:05.586 [2024-11-17 13:17:54.582239] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:05.586 [2024-11-17 13:17:54.582392] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:05.586 13:17:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.586 13:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:05.586 13:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:05.586 13:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:05.586 13:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:05.586 13:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:05.586 13:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:05.586 13:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:05.586 13:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:05.586 13:17:54 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:05.586 13:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:05.586 13:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.586 13:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:05.586 13:17:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.586 13:17:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.586 13:17:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.586 13:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:05.586 "name": "raid_bdev1", 00:08:05.586 "uuid": "43cd5d38-a036-45d4-b011-a3f50b6da63f", 00:08:05.586 "strip_size_kb": 0, 00:08:05.586 "state": "online", 00:08:05.586 "raid_level": "raid1", 00:08:05.586 "superblock": true, 00:08:05.586 "num_base_bdevs": 2, 00:08:05.586 "num_base_bdevs_discovered": 2, 00:08:05.586 "num_base_bdevs_operational": 2, 00:08:05.586 "base_bdevs_list": [ 00:08:05.586 { 00:08:05.586 "name": "BaseBdev1", 00:08:05.586 "uuid": "b1b348d4-1331-5957-9b54-b5f3f243d4f9", 00:08:05.586 "is_configured": true, 00:08:05.586 "data_offset": 2048, 00:08:05.586 "data_size": 63488 00:08:05.586 }, 00:08:05.586 { 00:08:05.586 "name": "BaseBdev2", 00:08:05.586 "uuid": "9d4e2cca-4af0-5b2b-8634-3876f6df25f6", 00:08:05.586 "is_configured": true, 00:08:05.586 "data_offset": 2048, 00:08:05.586 "data_size": 63488 00:08:05.586 } 00:08:05.586 ] 00:08:05.586 }' 00:08:05.586 13:17:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:05.586 13:17:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.845 13:17:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:05.845 13:17:55 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:06.105 [2024-11-17 13:17:55.116371] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:07.046 13:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:07.046 13:17:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.046 13:17:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.046 [2024-11-17 13:17:56.032433] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:08:07.046 [2024-11-17 13:17:56.032632] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:07.046 [2024-11-17 13:17:56.032878] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:08:07.046 13:17:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.046 13:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:07.046 13:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:07.046 13:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:08:07.046 13:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:08:07.046 13:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:07.046 13:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:07.046 13:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:07.046 13:17:56 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:07.046 13:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:07.046 13:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:07.046 13:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:07.046 13:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:07.046 13:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:07.046 13:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.046 13:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.046 13:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:07.046 13:17:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.046 13:17:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.046 13:17:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.046 13:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:07.046 "name": "raid_bdev1", 00:08:07.046 "uuid": "43cd5d38-a036-45d4-b011-a3f50b6da63f", 00:08:07.046 "strip_size_kb": 0, 00:08:07.046 "state": "online", 00:08:07.046 "raid_level": "raid1", 00:08:07.046 "superblock": true, 00:08:07.046 "num_base_bdevs": 2, 00:08:07.046 "num_base_bdevs_discovered": 1, 00:08:07.046 "num_base_bdevs_operational": 1, 00:08:07.046 "base_bdevs_list": [ 00:08:07.046 { 00:08:07.046 "name": null, 00:08:07.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:07.046 "is_configured": false, 00:08:07.046 "data_offset": 0, 00:08:07.046 "data_size": 63488 00:08:07.046 }, 00:08:07.046 { 00:08:07.046 "name": 
"BaseBdev2", 00:08:07.046 "uuid": "9d4e2cca-4af0-5b2b-8634-3876f6df25f6", 00:08:07.046 "is_configured": true, 00:08:07.046 "data_offset": 2048, 00:08:07.046 "data_size": 63488 00:08:07.046 } 00:08:07.046 ] 00:08:07.046 }' 00:08:07.046 13:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:07.046 13:17:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.305 13:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:07.306 13:17:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.306 13:17:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.306 [2024-11-17 13:17:56.397071] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:07.306 [2024-11-17 13:17:56.397184] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:07.306 [2024-11-17 13:17:56.399837] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:07.306 [2024-11-17 13:17:56.399872] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:07.306 [2024-11-17 13:17:56.399926] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:07.306 [2024-11-17 13:17:56.399938] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:07.306 { 00:08:07.306 "results": [ 00:08:07.306 { 00:08:07.306 "job": "raid_bdev1", 00:08:07.306 "core_mask": "0x1", 00:08:07.306 "workload": "randrw", 00:08:07.306 "percentage": 50, 00:08:07.306 "status": "finished", 00:08:07.306 "queue_depth": 1, 00:08:07.306 "io_size": 131072, 00:08:07.306 "runtime": 1.280224, 00:08:07.306 "iops": 20147.25548029095, 00:08:07.306 "mibps": 2518.406935036369, 00:08:07.306 "io_failed": 0, 00:08:07.306 "io_timeout": 0, 
00:08:07.306 "avg_latency_us": 46.97825810699461, 00:08:07.306 "min_latency_us": 22.805240174672488, 00:08:07.306 "max_latency_us": 1345.0620087336245 00:08:07.306 } 00:08:07.306 ], 00:08:07.306 "core_count": 1 00:08:07.306 } 00:08:07.306 13:17:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.306 13:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63624 00:08:07.306 13:17:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 63624 ']' 00:08:07.306 13:17:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63624 00:08:07.306 13:17:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:07.306 13:17:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:07.306 13:17:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63624 00:08:07.306 13:17:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:07.306 13:17:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:07.306 13:17:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63624' 00:08:07.306 killing process with pid 63624 00:08:07.306 13:17:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63624 00:08:07.306 [2024-11-17 13:17:56.444256] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:07.306 13:17:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63624 00:08:07.565 [2024-11-17 13:17:56.578626] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:08.948 13:17:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:08.948 13:17:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v 
Job /raidtest/tmp.2BgAEegN79 00:08:08.948 13:17:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:08.948 13:17:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:08.948 13:17:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:08.948 13:17:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:08.948 13:17:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:08.948 13:17:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:08.948 00:08:08.948 real 0m4.240s 00:08:08.948 user 0m5.018s 00:08:08.948 sys 0m0.529s 00:08:08.948 13:17:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:08.948 13:17:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.948 ************************************ 00:08:08.948 END TEST raid_write_error_test 00:08:08.948 ************************************ 00:08:08.948 13:17:57 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:08.948 13:17:57 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:08.948 13:17:57 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:08:08.948 13:17:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:08.948 13:17:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:08.948 13:17:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:08.948 ************************************ 00:08:08.948 START TEST raid_state_function_test 00:08:08.948 ************************************ 00:08:08.948 13:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:08:08.948 13:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # 
local raid_level=raid0 00:08:08.948 13:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:08.948 13:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:08.948 13:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:08.948 13:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:08.948 13:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:08.948 13:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:08.948 13:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:08.948 13:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:08.948 13:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:08.948 13:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:08.948 13:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:08.948 13:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:08.948 13:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:08.948 13:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:08.948 13:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:08.948 13:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:08.948 13:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:08.948 13:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:08.948 13:17:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:08.948 13:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:08.948 13:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:08.948 13:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:08.948 13:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:08.948 13:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:08.948 13:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:08.948 Process raid pid: 63768 00:08:08.948 13:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63768 00:08:08.948 13:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63768' 00:08:08.948 13:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:08.948 13:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63768 00:08:08.948 13:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 63768 ']' 00:08:08.948 13:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.948 13:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:08.948 13:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:08.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:08.948 13:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:08.948 13:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.948 [2024-11-17 13:17:57.908806] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:08:08.948 [2024-11-17 13:17:57.909009] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:08.948 [2024-11-17 13:17:58.082302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.215 [2024-11-17 13:17:58.199001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.215 [2024-11-17 13:17:58.403671] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:09.215 [2024-11-17 13:17:58.403717] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:09.785 13:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:09.785 13:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:09.785 13:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:09.785 13:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.785 13:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.785 [2024-11-17 13:17:58.743386] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:09.785 [2024-11-17 13:17:58.743442] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:09.785 [2024-11-17 13:17:58.743453] 
bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:09.785 [2024-11-17 13:17:58.743462] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:09.785 [2024-11-17 13:17:58.743468] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:09.785 [2024-11-17 13:17:58.743477] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:09.785 13:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.785 13:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:09.785 13:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:09.785 13:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:09.785 13:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:09.785 13:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:09.785 13:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:09.785 13:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.785 13:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.785 13:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.785 13:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.785 13:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:09.785 13:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:08:09.785 13:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.785 13:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.785 13:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.785 13:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.785 "name": "Existed_Raid", 00:08:09.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.785 "strip_size_kb": 64, 00:08:09.785 "state": "configuring", 00:08:09.785 "raid_level": "raid0", 00:08:09.785 "superblock": false, 00:08:09.785 "num_base_bdevs": 3, 00:08:09.785 "num_base_bdevs_discovered": 0, 00:08:09.785 "num_base_bdevs_operational": 3, 00:08:09.785 "base_bdevs_list": [ 00:08:09.785 { 00:08:09.785 "name": "BaseBdev1", 00:08:09.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.785 "is_configured": false, 00:08:09.785 "data_offset": 0, 00:08:09.785 "data_size": 0 00:08:09.785 }, 00:08:09.785 { 00:08:09.785 "name": "BaseBdev2", 00:08:09.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.785 "is_configured": false, 00:08:09.785 "data_offset": 0, 00:08:09.785 "data_size": 0 00:08:09.785 }, 00:08:09.785 { 00:08:09.785 "name": "BaseBdev3", 00:08:09.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.785 "is_configured": false, 00:08:09.785 "data_offset": 0, 00:08:09.785 "data_size": 0 00:08:09.785 } 00:08:09.785 ] 00:08:09.785 }' 00:08:09.785 13:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.785 13:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.045 13:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:10.045 13:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.045 13:17:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.045 [2024-11-17 13:17:59.162631] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:10.045 [2024-11-17 13:17:59.162738] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:10.045 13:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.045 13:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:10.045 13:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.045 13:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.045 [2024-11-17 13:17:59.174591] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:10.045 [2024-11-17 13:17:59.174637] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:10.045 [2024-11-17 13:17:59.174647] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:10.045 [2024-11-17 13:17:59.174656] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:10.045 [2024-11-17 13:17:59.174661] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:10.045 [2024-11-17 13:17:59.174669] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:10.045 13:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.045 13:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:10.045 13:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:10.045 13:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.045 [2024-11-17 13:17:59.221487] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:10.045 BaseBdev1 00:08:10.045 13:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.045 13:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:10.045 13:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:10.045 13:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:10.045 13:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:10.045 13:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:10.045 13:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:10.045 13:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:10.045 13:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.045 13:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.045 13:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.045 13:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:10.045 13:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.045 13:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.045 [ 00:08:10.045 { 00:08:10.045 "name": "BaseBdev1", 00:08:10.045 "aliases": [ 00:08:10.045 "b4a9bd3d-892f-42cb-9ff9-81de06e30d7e" 00:08:10.045 ], 00:08:10.045 
"product_name": "Malloc disk", 00:08:10.045 "block_size": 512, 00:08:10.045 "num_blocks": 65536, 00:08:10.045 "uuid": "b4a9bd3d-892f-42cb-9ff9-81de06e30d7e", 00:08:10.045 "assigned_rate_limits": { 00:08:10.045 "rw_ios_per_sec": 0, 00:08:10.045 "rw_mbytes_per_sec": 0, 00:08:10.045 "r_mbytes_per_sec": 0, 00:08:10.045 "w_mbytes_per_sec": 0 00:08:10.045 }, 00:08:10.045 "claimed": true, 00:08:10.045 "claim_type": "exclusive_write", 00:08:10.045 "zoned": false, 00:08:10.045 "supported_io_types": { 00:08:10.045 "read": true, 00:08:10.045 "write": true, 00:08:10.045 "unmap": true, 00:08:10.045 "flush": true, 00:08:10.045 "reset": true, 00:08:10.045 "nvme_admin": false, 00:08:10.045 "nvme_io": false, 00:08:10.045 "nvme_io_md": false, 00:08:10.045 "write_zeroes": true, 00:08:10.045 "zcopy": true, 00:08:10.045 "get_zone_info": false, 00:08:10.045 "zone_management": false, 00:08:10.045 "zone_append": false, 00:08:10.045 "compare": false, 00:08:10.045 "compare_and_write": false, 00:08:10.045 "abort": true, 00:08:10.045 "seek_hole": false, 00:08:10.045 "seek_data": false, 00:08:10.045 "copy": true, 00:08:10.045 "nvme_iov_md": false 00:08:10.045 }, 00:08:10.045 "memory_domains": [ 00:08:10.045 { 00:08:10.045 "dma_device_id": "system", 00:08:10.045 "dma_device_type": 1 00:08:10.045 }, 00:08:10.045 { 00:08:10.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:10.045 "dma_device_type": 2 00:08:10.045 } 00:08:10.045 ], 00:08:10.045 "driver_specific": {} 00:08:10.045 } 00:08:10.045 ] 00:08:10.045 13:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.045 13:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:10.045 13:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:10.045 13:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:10.045 13:17:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:10.045 13:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:10.045 13:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:10.045 13:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:10.045 13:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.045 13:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.045 13:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.045 13:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.045 13:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.045 13:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:10.045 13:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.045 13:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.304 13:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.304 13:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.304 "name": "Existed_Raid", 00:08:10.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.304 "strip_size_kb": 64, 00:08:10.304 "state": "configuring", 00:08:10.304 "raid_level": "raid0", 00:08:10.304 "superblock": false, 00:08:10.304 "num_base_bdevs": 3, 00:08:10.304 "num_base_bdevs_discovered": 1, 00:08:10.304 "num_base_bdevs_operational": 3, 00:08:10.304 "base_bdevs_list": [ 00:08:10.304 { 00:08:10.304 "name": "BaseBdev1", 
00:08:10.304 "uuid": "b4a9bd3d-892f-42cb-9ff9-81de06e30d7e", 00:08:10.304 "is_configured": true, 00:08:10.304 "data_offset": 0, 00:08:10.304 "data_size": 65536 00:08:10.304 }, 00:08:10.304 { 00:08:10.304 "name": "BaseBdev2", 00:08:10.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.304 "is_configured": false, 00:08:10.304 "data_offset": 0, 00:08:10.304 "data_size": 0 00:08:10.304 }, 00:08:10.304 { 00:08:10.304 "name": "BaseBdev3", 00:08:10.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.304 "is_configured": false, 00:08:10.304 "data_offset": 0, 00:08:10.304 "data_size": 0 00:08:10.304 } 00:08:10.304 ] 00:08:10.304 }' 00:08:10.304 13:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.304 13:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.563 13:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:10.564 13:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.564 13:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.564 [2024-11-17 13:17:59.656801] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:10.564 [2024-11-17 13:17:59.656927] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:10.564 13:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.564 13:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:10.564 13:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.564 13:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.564 [2024-11-17 
13:17:59.664826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:10.564 [2024-11-17 13:17:59.666773] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:10.564 [2024-11-17 13:17:59.666867] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:10.564 [2024-11-17 13:17:59.666897] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:10.564 [2024-11-17 13:17:59.666921] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:10.564 13:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.564 13:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:10.564 13:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:10.564 13:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:10.564 13:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:10.564 13:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:10.564 13:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:10.564 13:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:10.564 13:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:10.564 13:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.564 13:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.564 13:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:10.564 13:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.564 13:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.564 13:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.564 13:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:10.564 13:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.564 13:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.564 13:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.564 "name": "Existed_Raid", 00:08:10.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.564 "strip_size_kb": 64, 00:08:10.564 "state": "configuring", 00:08:10.564 "raid_level": "raid0", 00:08:10.564 "superblock": false, 00:08:10.564 "num_base_bdevs": 3, 00:08:10.564 "num_base_bdevs_discovered": 1, 00:08:10.564 "num_base_bdevs_operational": 3, 00:08:10.564 "base_bdevs_list": [ 00:08:10.564 { 00:08:10.564 "name": "BaseBdev1", 00:08:10.564 "uuid": "b4a9bd3d-892f-42cb-9ff9-81de06e30d7e", 00:08:10.564 "is_configured": true, 00:08:10.564 "data_offset": 0, 00:08:10.564 "data_size": 65536 00:08:10.564 }, 00:08:10.564 { 00:08:10.564 "name": "BaseBdev2", 00:08:10.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.564 "is_configured": false, 00:08:10.564 "data_offset": 0, 00:08:10.564 "data_size": 0 00:08:10.564 }, 00:08:10.564 { 00:08:10.564 "name": "BaseBdev3", 00:08:10.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.564 "is_configured": false, 00:08:10.564 "data_offset": 0, 00:08:10.564 "data_size": 0 00:08:10.564 } 00:08:10.564 ] 00:08:10.564 }' 00:08:10.564 13:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:10.564 13:17:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:11.134 13:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:08:11.134 13:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:11.134 13:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:11.134 [2024-11-17 13:18:00.166930] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:08:11.134 BaseBdev2
00:08:11.134 13:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:11.134 13:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:08:11.134 13:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:08:11.134 13:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:08:11.134 13:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:08:11.134 13:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:08:11.134 13:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:08:11.134 13:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:08:11.134 13:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:11.134 13:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:11.134 13:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:11.134 13:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:08:11.134 13:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:11.134 13:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:11.134 [
00:08:11.134 {
00:08:11.134 "name": "BaseBdev2",
00:08:11.134 "aliases": [
00:08:11.134 "944d00c6-0870-45d0-9d9a-65ca9730d58f"
00:08:11.134 ],
00:08:11.134 "product_name": "Malloc disk",
00:08:11.134 "block_size": 512,
00:08:11.134 "num_blocks": 65536,
00:08:11.134 "uuid": "944d00c6-0870-45d0-9d9a-65ca9730d58f",
00:08:11.134 "assigned_rate_limits": {
00:08:11.134 "rw_ios_per_sec": 0,
00:08:11.134 "rw_mbytes_per_sec": 0,
00:08:11.134 "r_mbytes_per_sec": 0,
00:08:11.134 "w_mbytes_per_sec": 0
00:08:11.134 },
00:08:11.134 "claimed": true,
00:08:11.134 "claim_type": "exclusive_write",
00:08:11.134 "zoned": false,
00:08:11.134 "supported_io_types": {
00:08:11.134 "read": true,
00:08:11.134 "write": true,
00:08:11.134 "unmap": true,
00:08:11.134 "flush": true,
00:08:11.134 "reset": true,
00:08:11.134 "nvme_admin": false,
00:08:11.134 "nvme_io": false,
00:08:11.134 "nvme_io_md": false,
00:08:11.134 "write_zeroes": true,
00:08:11.134 "zcopy": true,
00:08:11.134 "get_zone_info": false,
00:08:11.134 "zone_management": false,
00:08:11.134 "zone_append": false,
00:08:11.134 "compare": false,
00:08:11.134 "compare_and_write": false,
00:08:11.134 "abort": true,
00:08:11.134 "seek_hole": false,
00:08:11.134 "seek_data": false,
00:08:11.134 "copy": true,
00:08:11.134 "nvme_iov_md": false
00:08:11.134 },
00:08:11.134 "memory_domains": [
00:08:11.134 {
00:08:11.134 "dma_device_id": "system",
00:08:11.134 "dma_device_type": 1
00:08:11.134 },
00:08:11.134 {
00:08:11.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:11.134 "dma_device_type": 2
00:08:11.134 }
00:08:11.134 ],
00:08:11.134 "driver_specific": {}
00:08:11.134 }
00:08:11.134 ]
00:08:11.134 13:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:11.134 13:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:08:11.134 13:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:08:11.134 13:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:08:11.134 13:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:11.134 13:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:11.134 13:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:11.134 13:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:11.134 13:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:11.134 13:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:11.134 13:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:11.134 13:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:11.134 13:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:11.134 13:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:11.134 13:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:11.134 13:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:11.134 13:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:11.134 13:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:11.134 13:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:11.134 13:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:11.134 "name": "Existed_Raid",
00:08:11.134 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:11.134 "strip_size_kb": 64,
00:08:11.134 "state": "configuring",
00:08:11.134 "raid_level": "raid0",
00:08:11.134 "superblock": false,
00:08:11.134 "num_base_bdevs": 3,
00:08:11.134 "num_base_bdevs_discovered": 2,
00:08:11.134 "num_base_bdevs_operational": 3,
00:08:11.134 "base_bdevs_list": [
00:08:11.134 {
00:08:11.134 "name": "BaseBdev1",
00:08:11.134 "uuid": "b4a9bd3d-892f-42cb-9ff9-81de06e30d7e",
00:08:11.134 "is_configured": true,
00:08:11.134 "data_offset": 0,
00:08:11.134 "data_size": 65536
00:08:11.134 },
00:08:11.134 {
00:08:11.134 "name": "BaseBdev2",
00:08:11.134 "uuid": "944d00c6-0870-45d0-9d9a-65ca9730d58f",
00:08:11.134 "is_configured": true,
00:08:11.134 "data_offset": 0,
00:08:11.134 "data_size": 65536
00:08:11.134 },
00:08:11.134 {
00:08:11.134 "name": "BaseBdev3",
00:08:11.134 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:11.134 "is_configured": false,
00:08:11.134 "data_offset": 0,
00:08:11.134 "data_size": 0
00:08:11.134 }
00:08:11.134 ]
00:08:11.134 }'
00:08:11.134 13:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:11.134 13:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:11.394 13:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:08:11.394 13:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:11.394 13:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:11.654 [2024-11-17 13:18:00.656523] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:08:11.654 [2024-11-17 13:18:00.656644] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:08:11.654 [2024-11-17 13:18:00.656665] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512
00:08:11.654 [2024-11-17 13:18:00.656966] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:08:11.654 [2024-11-17 13:18:00.657119] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:08:11.654 [2024-11-17 13:18:00.657128] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:08:11.654 [2024-11-17 13:18:00.657419] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:11.654 BaseBdev3
00:08:11.654 13:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:11.654 13:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:08:11.654 13:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:08:11.654 13:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:08:11.654 13:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:08:11.654 13:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:08:11.654 13:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:08:11.654 13:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:08:11.654 13:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:11.654 13:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:11.654 13:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:11.654 13:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:08:11.654 13:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:11.654 13:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:11.654 [
00:08:11.654 {
00:08:11.654 "name": "BaseBdev3",
00:08:11.654 "aliases": [
00:08:11.654 "61c56609-5822-4f8b-b2fd-71f5c0193306"
00:08:11.654 ],
00:08:11.654 "product_name": "Malloc disk",
00:08:11.654 "block_size": 512,
00:08:11.654 "num_blocks": 65536,
00:08:11.654 "uuid": "61c56609-5822-4f8b-b2fd-71f5c0193306",
00:08:11.654 "assigned_rate_limits": {
00:08:11.654 "rw_ios_per_sec": 0,
00:08:11.654 "rw_mbytes_per_sec": 0,
00:08:11.654 "r_mbytes_per_sec": 0,
00:08:11.654 "w_mbytes_per_sec": 0
00:08:11.654 },
00:08:11.654 "claimed": true,
00:08:11.654 "claim_type": "exclusive_write",
00:08:11.654 "zoned": false,
00:08:11.654 "supported_io_types": {
00:08:11.654 "read": true,
00:08:11.654 "write": true,
00:08:11.654 "unmap": true,
00:08:11.654 "flush": true,
00:08:11.654 "reset": true,
00:08:11.654 "nvme_admin": false,
00:08:11.654 "nvme_io": false,
00:08:11.654 "nvme_io_md": false,
00:08:11.654 "write_zeroes": true,
00:08:11.654 "zcopy": true,
00:08:11.654 "get_zone_info": false,
00:08:11.654 "zone_management": false,
00:08:11.654 "zone_append": false,
00:08:11.654 "compare": false,
00:08:11.654 "compare_and_write": false,
00:08:11.654 "abort": true,
00:08:11.654 "seek_hole": false,
00:08:11.654 "seek_data": false,
00:08:11.654 "copy": true,
00:08:11.654 "nvme_iov_md": false
00:08:11.654 },
00:08:11.654 "memory_domains": [
00:08:11.654 {
00:08:11.654 "dma_device_id": "system",
00:08:11.654 "dma_device_type": 1
00:08:11.654 },
00:08:11.654 {
00:08:11.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:11.654 "dma_device_type": 2
00:08:11.654 }
00:08:11.654 ],
00:08:11.654 "driver_specific": {}
00:08:11.654 }
00:08:11.654 ]
00:08:11.654 13:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:11.654 13:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:08:11.654 13:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:08:11.654 13:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:08:11.654 13:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3
00:08:11.654 13:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:11.654 13:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:11.654 13:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:11.654 13:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:11.654 13:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:11.654 13:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:11.654 13:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:11.654 13:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:11.654 13:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:11.655 13:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:11.655 13:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:11.655 13:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:11.655 13:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:11.655 13:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:11.655 13:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:11.655 "name": "Existed_Raid",
00:08:11.655 "uuid": "2f9cc9b6-d791-42f5-be57-aaffe891842d",
00:08:11.655 "strip_size_kb": 64,
00:08:11.655 "state": "online",
00:08:11.655 "raid_level": "raid0",
00:08:11.655 "superblock": false,
00:08:11.655 "num_base_bdevs": 3,
00:08:11.655 "num_base_bdevs_discovered": 3,
00:08:11.655 "num_base_bdevs_operational": 3,
00:08:11.655 "base_bdevs_list": [
00:08:11.655 {
00:08:11.655 "name": "BaseBdev1",
00:08:11.655 "uuid": "b4a9bd3d-892f-42cb-9ff9-81de06e30d7e",
00:08:11.655 "is_configured": true,
00:08:11.655 "data_offset": 0,
00:08:11.655 "data_size": 65536
00:08:11.655 },
00:08:11.655 {
00:08:11.655 "name": "BaseBdev2",
00:08:11.655 "uuid": "944d00c6-0870-45d0-9d9a-65ca9730d58f",
00:08:11.655 "is_configured": true,
00:08:11.655 "data_offset": 0,
00:08:11.655 "data_size": 65536
00:08:11.655 },
00:08:11.655 {
00:08:11.655 "name": "BaseBdev3",
00:08:11.655 "uuid": "61c56609-5822-4f8b-b2fd-71f5c0193306",
00:08:11.655 "is_configured": true,
00:08:11.655 "data_offset": 0,
00:08:11.655 "data_size": 65536
00:08:11.655 }
00:08:11.655 ]
00:08:11.655 }'
00:08:11.655 13:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:11.655 13:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:11.914 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:08:11.914 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:08:11.914 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:08:11.914 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:08:11.914 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:08:11.914 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:08:11.914 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:08:11.914 13:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:11.914 13:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:11.914 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:08:11.914 [2024-11-17 13:18:01.108151] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:11.914 13:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:12.175 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:08:12.175 "name": "Existed_Raid",
00:08:12.175 "aliases": [
00:08:12.175 "2f9cc9b6-d791-42f5-be57-aaffe891842d"
00:08:12.175 ],
00:08:12.175 "product_name": "Raid Volume",
00:08:12.175 "block_size": 512,
00:08:12.175 "num_blocks": 196608,
00:08:12.175 "uuid": "2f9cc9b6-d791-42f5-be57-aaffe891842d",
00:08:12.175 "assigned_rate_limits": {
00:08:12.175 "rw_ios_per_sec": 0,
00:08:12.175 "rw_mbytes_per_sec": 0,
00:08:12.175 "r_mbytes_per_sec": 0,
00:08:12.175 "w_mbytes_per_sec": 0
00:08:12.175 },
00:08:12.175 "claimed": false,
00:08:12.175 "zoned": false,
00:08:12.175 "supported_io_types": {
00:08:12.175 "read": true,
00:08:12.175 "write": true,
00:08:12.175 "unmap": true,
00:08:12.175 "flush": true,
00:08:12.175 "reset": true,
00:08:12.175 "nvme_admin": false,
00:08:12.175 "nvme_io": false,
00:08:12.175 "nvme_io_md": false,
00:08:12.175 "write_zeroes": true,
00:08:12.175 "zcopy": false,
00:08:12.175 "get_zone_info": false,
00:08:12.175 "zone_management": false,
00:08:12.175 "zone_append": false,
00:08:12.175 "compare": false,
00:08:12.175 "compare_and_write": false,
00:08:12.175 "abort": false,
00:08:12.175 "seek_hole": false,
00:08:12.175 "seek_data": false,
00:08:12.175 "copy": false,
00:08:12.175 "nvme_iov_md": false
00:08:12.175 },
00:08:12.175 "memory_domains": [
00:08:12.175 {
00:08:12.175 "dma_device_id": "system",
00:08:12.175 "dma_device_type": 1
00:08:12.175 },
00:08:12.175 {
00:08:12.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:12.175 "dma_device_type": 2
00:08:12.175 },
00:08:12.175 {
00:08:12.175 "dma_device_id": "system",
00:08:12.175 "dma_device_type": 1
00:08:12.175 },
00:08:12.175 {
00:08:12.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:12.175 "dma_device_type": 2
00:08:12.175 },
00:08:12.175 {
00:08:12.175 "dma_device_id": "system",
00:08:12.175 "dma_device_type": 1
00:08:12.175 },
00:08:12.175 {
00:08:12.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:12.175 "dma_device_type": 2
00:08:12.175 }
00:08:12.175 ],
00:08:12.175 "driver_specific": {
00:08:12.175 "raid": {
00:08:12.175 "uuid": "2f9cc9b6-d791-42f5-be57-aaffe891842d",
00:08:12.175 "strip_size_kb": 64,
00:08:12.175 "state": "online",
00:08:12.175 "raid_level": "raid0",
00:08:12.175 "superblock": false,
00:08:12.175 "num_base_bdevs": 3,
00:08:12.175 "num_base_bdevs_discovered": 3,
00:08:12.175 "num_base_bdevs_operational": 3,
00:08:12.175 "base_bdevs_list": [
00:08:12.175 {
00:08:12.175 "name": "BaseBdev1",
00:08:12.175 "uuid": "b4a9bd3d-892f-42cb-9ff9-81de06e30d7e",
00:08:12.175 "is_configured": true,
00:08:12.175 "data_offset": 0,
00:08:12.175 "data_size": 65536
00:08:12.175 },
00:08:12.175 {
00:08:12.175 "name": "BaseBdev2",
00:08:12.175 "uuid": "944d00c6-0870-45d0-9d9a-65ca9730d58f",
00:08:12.175 "is_configured": true,
00:08:12.175 "data_offset": 0,
00:08:12.175 "data_size": 65536
00:08:12.175 },
00:08:12.175 {
00:08:12.175 "name": "BaseBdev3",
00:08:12.175 "uuid": "61c56609-5822-4f8b-b2fd-71f5c0193306",
00:08:12.175 "is_configured": true,
00:08:12.175 "data_offset": 0,
00:08:12.175 "data_size": 65536
00:08:12.175 }
00:08:12.175 ]
00:08:12.175 }
00:08:12.175 }
00:08:12.175 }'
00:08:12.175 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:08:12.175 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:08:12.175 BaseBdev2
00:08:12.175 BaseBdev3'
00:08:12.175 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:12.175 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:08:12.175 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:12.175 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:08:12.175 13:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:12.175 13:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:12.175 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:12.175 13:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:12.175 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:12.175 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:12.175 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:12.175 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:08:12.175 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:12.175 13:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:12.175 13:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:12.175 13:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:12.175 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:12.175 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:12.175 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:12.175 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:08:12.175 13:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:12.175 13:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:12.175 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:12.175 13:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:12.175 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:12.175 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:12.175 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:08:12.175 13:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:12.175 13:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:12.175 [2024-11-17 13:18:01.363418] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:08:12.175 [2024-11-17 13:18:01.363448] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:08:12.175 [2024-11-17 13:18:01.363501] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:12.435 13:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:12.435 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state
00:08:12.435 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0
00:08:12.435 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:08:12.435 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1
00:08:12.435 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:08:12.435 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2
00:08:12.435 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:12.435 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:08:12.435 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:12.435 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:12.435 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:12.435 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:12.435 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:12.436 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:12.436 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:12.436 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:12.436 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:12.436 13:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:12.436 13:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:12.436 13:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:12.436 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:12.436 "name": "Existed_Raid",
00:08:12.436 "uuid": "2f9cc9b6-d791-42f5-be57-aaffe891842d",
00:08:12.436 "strip_size_kb": 64,
00:08:12.436 "state": "offline",
00:08:12.436 "raid_level": "raid0",
00:08:12.436 "superblock": false,
00:08:12.436 "num_base_bdevs": 3,
00:08:12.436 "num_base_bdevs_discovered": 2,
00:08:12.436 "num_base_bdevs_operational": 2,
00:08:12.436 "base_bdevs_list": [
00:08:12.436 {
00:08:12.436 "name": null,
00:08:12.436 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:12.436 "is_configured": false,
00:08:12.436 "data_offset": 0,
00:08:12.436 "data_size": 65536
00:08:12.436 },
00:08:12.436 {
00:08:12.436 "name": "BaseBdev2",
00:08:12.436 "uuid": "944d00c6-0870-45d0-9d9a-65ca9730d58f",
00:08:12.436 "is_configured": true,
00:08:12.436 "data_offset": 0,
00:08:12.436 "data_size": 65536
00:08:12.436 },
00:08:12.436 {
00:08:12.436 "name": "BaseBdev3",
00:08:12.436 "uuid": "61c56609-5822-4f8b-b2fd-71f5c0193306",
00:08:12.436 "is_configured": true,
00:08:12.436 "data_offset": 0,
00:08:12.436 "data_size": 65536
00:08:12.436 }
00:08:12.436 ]
00:08:12.436 }'
00:08:12.436 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:12.436 13:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:12.695 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:08:12.695 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:08:12.695 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:12.695 13:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:12.695 13:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:12.695 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:08:12.695 13:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:12.696 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:08:12.696 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:08:12.696 13:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:08:12.696 13:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:12.696 13:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:12.696 [2024-11-17 13:18:01.909392] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:08:12.955 13:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:12.955 13:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:08:12.955 13:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:08:12.955 13:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:08:12.955 13:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:12.955 13:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:12.955 13:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:12.955 13:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:12.955 13:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:08:12.955 13:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:08:12.955 13:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:08:12.955 13:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:12.955 13:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:12.955 [2024-11-17 13:18:02.060434] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:08:12.955 [2024-11-17 13:18:02.060550] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:08:12.955 13:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:12.955 13:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:08:12.955 13:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:08:12.955 13:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:08:12.955 13:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:12.955 13:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:12.955 13:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:12.955 13:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:13.216 13:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:08:13.216 13:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:08:13.216 13:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']'
00:08:13.216 13:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:08:13.216 13:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:08:13.216 13:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:08:13.216 13:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:13.216 13:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:13.216 BaseBdev2
00:08:13.216 13:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:13.216 13:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:08:13.216 13:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:08:13.216 13:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:08:13.216 13:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:08:13.216 13:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:08:13.216 13:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:08:13.216 13:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:08:13.216 13:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:13.216 13:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:13.216 13:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:13.216 13:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:08:13.216 13:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:13.216 13:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:13.216 [
00:08:13.216 {
00:08:13.216 "name": "BaseBdev2",
00:08:13.216 "aliases": [
00:08:13.216 "83301a9e-f1aa-4752-ad93-88a2da0a65c3"
00:08:13.216 ],
00:08:13.216 "product_name": "Malloc disk",
00:08:13.216 "block_size": 512,
00:08:13.216 "num_blocks": 65536,
00:08:13.216 "uuid": "83301a9e-f1aa-4752-ad93-88a2da0a65c3",
00:08:13.216 "assigned_rate_limits": {
00:08:13.216 "rw_ios_per_sec": 0,
00:08:13.216 "rw_mbytes_per_sec": 0,
00:08:13.216 "r_mbytes_per_sec": 0,
00:08:13.216 "w_mbytes_per_sec": 0
00:08:13.216 },
00:08:13.216 "claimed": false,
00:08:13.216 "zoned": false,
00:08:13.216 "supported_io_types": {
00:08:13.216 "read": true,
00:08:13.216 "write": true,
00:08:13.216 "unmap": true,
00:08:13.216 "flush": true,
00:08:13.216 "reset": true,
00:08:13.216 "nvme_admin": false,
00:08:13.216 "nvme_io": false,
00:08:13.216 "nvme_io_md": false,
00:08:13.216 "write_zeroes": true,
00:08:13.216 "zcopy": true,
00:08:13.216 "get_zone_info": false,
00:08:13.216 "zone_management": false,
00:08:13.216 "zone_append": false,
00:08:13.216 "compare": false,
00:08:13.216 "compare_and_write": false,
00:08:13.216 "abort": true,
00:08:13.216 "seek_hole": false,
00:08:13.216 "seek_data": false,
00:08:13.216 "copy": true,
00:08:13.216 "nvme_iov_md": false
00:08:13.216 },
00:08:13.216 "memory_domains": [
00:08:13.216 {
00:08:13.216 "dma_device_id": "system",
00:08:13.216 "dma_device_type": 1
00:08:13.216 },
00:08:13.216 {
00:08:13.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:13.216 "dma_device_type": 2
00:08:13.216 }
00:08:13.216 ],
00:08:13.216 "driver_specific": {}
00:08:13.216 }
00:08:13.216 ]
00:08:13.216 13:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:13.216 13:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:08:13.216 13:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:08:13.216 13:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:08:13.216 13:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:08:13.216 13:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:13.216 13:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:13.216 BaseBdev3
00:08:13.216 13:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:13.216 13:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:08:13.216 13:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:08:13.217 13:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:08:13.217 13:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:08:13.217 13:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:08:13.217 13:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:08:13.217 13:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:08:13.217 13:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:13.217 13:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:13.217 13:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:13.217 13:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:08:13.217 13:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:13.217 13:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:13.217 [
00:08:13.217 {
00:08:13.217 "name": "BaseBdev3",
00:08:13.217 "aliases": [
00:08:13.217 "ef53e44d-509d-49fd-97f1-b031363bf464"
00:08:13.217 ],
00:08:13.217 "product_name": "Malloc disk",
00:08:13.217 "block_size": 512,
00:08:13.217 "num_blocks": 65536,
00:08:13.217 "uuid": "ef53e44d-509d-49fd-97f1-b031363bf464",
00:08:13.217 "assigned_rate_limits": {
00:08:13.217 "rw_ios_per_sec": 0,
00:08:13.217 "rw_mbytes_per_sec": 0,
00:08:13.217 "r_mbytes_per_sec": 0,
00:08:13.217 "w_mbytes_per_sec": 0
00:08:13.217 },
00:08:13.217 "claimed": false,
00:08:13.217 "zoned": false,
00:08:13.217 "supported_io_types": {
00:08:13.217 "read": true,
00:08:13.217 "write": true,
00:08:13.217 "unmap": true,
00:08:13.217 "flush": true,
00:08:13.217 "reset": true,
00:08:13.217 "nvme_admin": false,
00:08:13.217 "nvme_io": false,
00:08:13.217 "nvme_io_md": false,
00:08:13.217 "write_zeroes": true,
00:08:13.217 "zcopy": true,
00:08:13.217 "get_zone_info": false,
00:08:13.217 "zone_management": false,
00:08:13.217 "zone_append": false,
00:08:13.217 "compare": false,
00:08:13.217 "compare_and_write": false,
00:08:13.217 "abort": true,
00:08:13.217 "seek_hole": false,
00:08:13.217 "seek_data": false,
00:08:13.217 "copy": true,
00:08:13.217 "nvme_iov_md": false
00:08:13.217 },
00:08:13.217 "memory_domains": [
00:08:13.217 {
00:08:13.217 "dma_device_id": "system",
00:08:13.217 "dma_device_type": 1
00:08:13.217 },
00:08:13.217 {
00:08:13.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:13.217 "dma_device_type": 2
00:08:13.217 }
00:08:13.217 ],
00:08:13.217 "driver_specific": {}
00:08:13.217 }
00:08:13.217 ]
00:08:13.217 13:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:13.217 13:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:08:13.217 13:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:08:13.217 13:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:08:13.217 13:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:08:13.217 13:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:13.217 13:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:13.217 [2024-11-17 13:18:02.361541] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:13.217 [2024-11-17 13:18:02.361632] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:13.217 [2024-11-17 13:18:02.361677] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:08:13.217 [2024-11-17 13:18:02.363469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:08:13.217 13:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:13.217 13:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:13.217 13:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:13.217 13:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
13:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:13.217 13:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:13.217 13:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:13.217 13:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.217 13:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.217 13:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.217 13:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.217 13:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.217 13:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:13.217 13:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.217 13:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.217 13:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.217 13:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.217 "name": "Existed_Raid", 00:08:13.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:13.217 "strip_size_kb": 64, 00:08:13.217 "state": "configuring", 00:08:13.217 "raid_level": "raid0", 00:08:13.217 "superblock": false, 00:08:13.217 "num_base_bdevs": 3, 00:08:13.217 "num_base_bdevs_discovered": 2, 00:08:13.217 "num_base_bdevs_operational": 3, 00:08:13.217 "base_bdevs_list": [ 00:08:13.217 { 00:08:13.217 "name": "BaseBdev1", 00:08:13.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:13.217 "is_configured": false, 00:08:13.217 
"data_offset": 0, 00:08:13.217 "data_size": 0 00:08:13.217 }, 00:08:13.217 { 00:08:13.217 "name": "BaseBdev2", 00:08:13.217 "uuid": "83301a9e-f1aa-4752-ad93-88a2da0a65c3", 00:08:13.217 "is_configured": true, 00:08:13.217 "data_offset": 0, 00:08:13.217 "data_size": 65536 00:08:13.217 }, 00:08:13.217 { 00:08:13.217 "name": "BaseBdev3", 00:08:13.217 "uuid": "ef53e44d-509d-49fd-97f1-b031363bf464", 00:08:13.217 "is_configured": true, 00:08:13.217 "data_offset": 0, 00:08:13.217 "data_size": 65536 00:08:13.217 } 00:08:13.217 ] 00:08:13.217 }' 00:08:13.217 13:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.217 13:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.787 13:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:13.787 13:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.787 13:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.787 [2024-11-17 13:18:02.748901] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:13.787 13:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.787 13:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:13.787 13:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:13.787 13:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:13.787 13:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:13.787 13:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:13.787 13:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:08:13.787 13:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.787 13:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.787 13:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.787 13:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.787 13:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.787 13:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:13.787 13:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.787 13:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.787 13:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.787 13:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.787 "name": "Existed_Raid", 00:08:13.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:13.787 "strip_size_kb": 64, 00:08:13.787 "state": "configuring", 00:08:13.787 "raid_level": "raid0", 00:08:13.787 "superblock": false, 00:08:13.787 "num_base_bdevs": 3, 00:08:13.787 "num_base_bdevs_discovered": 1, 00:08:13.787 "num_base_bdevs_operational": 3, 00:08:13.787 "base_bdevs_list": [ 00:08:13.787 { 00:08:13.787 "name": "BaseBdev1", 00:08:13.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:13.787 "is_configured": false, 00:08:13.787 "data_offset": 0, 00:08:13.787 "data_size": 0 00:08:13.787 }, 00:08:13.787 { 00:08:13.787 "name": null, 00:08:13.787 "uuid": "83301a9e-f1aa-4752-ad93-88a2da0a65c3", 00:08:13.787 "is_configured": false, 00:08:13.787 "data_offset": 0, 00:08:13.787 "data_size": 65536 00:08:13.787 }, 00:08:13.787 { 
00:08:13.787 "name": "BaseBdev3", 00:08:13.787 "uuid": "ef53e44d-509d-49fd-97f1-b031363bf464", 00:08:13.787 "is_configured": true, 00:08:13.787 "data_offset": 0, 00:08:13.787 "data_size": 65536 00:08:13.787 } 00:08:13.787 ] 00:08:13.787 }' 00:08:13.787 13:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.787 13:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.047 13:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.047 13:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.047 13:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.047 13:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:14.047 13:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.047 13:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:14.047 13:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:14.047 13:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.047 13:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.047 [2024-11-17 13:18:03.237027] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:14.047 BaseBdev1 00:08:14.047 13:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.047 13:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:14.047 13:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:14.047 13:18:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:14.047 13:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:14.047 13:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:14.047 13:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:14.047 13:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:14.047 13:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.047 13:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.047 13:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.047 13:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:14.047 13:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.047 13:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.047 [ 00:08:14.047 { 00:08:14.047 "name": "BaseBdev1", 00:08:14.047 "aliases": [ 00:08:14.047 "7d4429de-d2c7-47df-a8e5-ce23d645158d" 00:08:14.047 ], 00:08:14.047 "product_name": "Malloc disk", 00:08:14.047 "block_size": 512, 00:08:14.047 "num_blocks": 65536, 00:08:14.047 "uuid": "7d4429de-d2c7-47df-a8e5-ce23d645158d", 00:08:14.047 "assigned_rate_limits": { 00:08:14.047 "rw_ios_per_sec": 0, 00:08:14.047 "rw_mbytes_per_sec": 0, 00:08:14.047 "r_mbytes_per_sec": 0, 00:08:14.047 "w_mbytes_per_sec": 0 00:08:14.047 }, 00:08:14.047 "claimed": true, 00:08:14.047 "claim_type": "exclusive_write", 00:08:14.047 "zoned": false, 00:08:14.047 "supported_io_types": { 00:08:14.047 "read": true, 00:08:14.047 "write": true, 00:08:14.047 "unmap": true, 00:08:14.047 "flush": true, 
00:08:14.047 "reset": true, 00:08:14.047 "nvme_admin": false, 00:08:14.047 "nvme_io": false, 00:08:14.047 "nvme_io_md": false, 00:08:14.047 "write_zeroes": true, 00:08:14.047 "zcopy": true, 00:08:14.047 "get_zone_info": false, 00:08:14.047 "zone_management": false, 00:08:14.047 "zone_append": false, 00:08:14.047 "compare": false, 00:08:14.047 "compare_and_write": false, 00:08:14.047 "abort": true, 00:08:14.047 "seek_hole": false, 00:08:14.307 "seek_data": false, 00:08:14.307 "copy": true, 00:08:14.307 "nvme_iov_md": false 00:08:14.307 }, 00:08:14.307 "memory_domains": [ 00:08:14.307 { 00:08:14.307 "dma_device_id": "system", 00:08:14.307 "dma_device_type": 1 00:08:14.307 }, 00:08:14.307 { 00:08:14.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:14.307 "dma_device_type": 2 00:08:14.307 } 00:08:14.307 ], 00:08:14.307 "driver_specific": {} 00:08:14.307 } 00:08:14.307 ] 00:08:14.307 13:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.307 13:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:14.307 13:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:14.307 13:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:14.307 13:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:14.307 13:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:14.307 13:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:14.307 13:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:14.307 13:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.307 13:18:03 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.307 13:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.307 13:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.307 13:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:14.307 13:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.307 13:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.307 13:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.307 13:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.307 13:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.307 "name": "Existed_Raid", 00:08:14.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.307 "strip_size_kb": 64, 00:08:14.307 "state": "configuring", 00:08:14.307 "raid_level": "raid0", 00:08:14.307 "superblock": false, 00:08:14.307 "num_base_bdevs": 3, 00:08:14.307 "num_base_bdevs_discovered": 2, 00:08:14.307 "num_base_bdevs_operational": 3, 00:08:14.307 "base_bdevs_list": [ 00:08:14.307 { 00:08:14.307 "name": "BaseBdev1", 00:08:14.307 "uuid": "7d4429de-d2c7-47df-a8e5-ce23d645158d", 00:08:14.307 "is_configured": true, 00:08:14.307 "data_offset": 0, 00:08:14.307 "data_size": 65536 00:08:14.307 }, 00:08:14.307 { 00:08:14.307 "name": null, 00:08:14.307 "uuid": "83301a9e-f1aa-4752-ad93-88a2da0a65c3", 00:08:14.307 "is_configured": false, 00:08:14.307 "data_offset": 0, 00:08:14.307 "data_size": 65536 00:08:14.307 }, 00:08:14.307 { 00:08:14.307 "name": "BaseBdev3", 00:08:14.307 "uuid": "ef53e44d-509d-49fd-97f1-b031363bf464", 00:08:14.307 "is_configured": true, 00:08:14.307 "data_offset": 0, 00:08:14.307 "data_size": 65536 
00:08:14.307 } 00:08:14.307 ] 00:08:14.307 }' 00:08:14.307 13:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.308 13:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.568 13:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.568 13:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.568 13:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.568 13:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:14.568 13:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.568 13:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:14.568 13:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:14.568 13:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.568 13:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.568 [2024-11-17 13:18:03.704358] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:14.568 13:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.568 13:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:14.568 13:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:14.568 13:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:14.568 13:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:14.568 
13:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:14.568 13:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:14.568 13:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.568 13:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.568 13:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.568 13:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.568 13:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:14.568 13:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.568 13:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.568 13:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.568 13:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.568 13:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.568 "name": "Existed_Raid", 00:08:14.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.568 "strip_size_kb": 64, 00:08:14.568 "state": "configuring", 00:08:14.568 "raid_level": "raid0", 00:08:14.568 "superblock": false, 00:08:14.568 "num_base_bdevs": 3, 00:08:14.568 "num_base_bdevs_discovered": 1, 00:08:14.568 "num_base_bdevs_operational": 3, 00:08:14.568 "base_bdevs_list": [ 00:08:14.568 { 00:08:14.568 "name": "BaseBdev1", 00:08:14.568 "uuid": "7d4429de-d2c7-47df-a8e5-ce23d645158d", 00:08:14.568 "is_configured": true, 00:08:14.568 "data_offset": 0, 00:08:14.568 "data_size": 65536 00:08:14.568 }, 00:08:14.568 { 00:08:14.568 "name": null, 
00:08:14.568 "uuid": "83301a9e-f1aa-4752-ad93-88a2da0a65c3", 00:08:14.568 "is_configured": false, 00:08:14.568 "data_offset": 0, 00:08:14.568 "data_size": 65536 00:08:14.568 }, 00:08:14.568 { 00:08:14.568 "name": null, 00:08:14.568 "uuid": "ef53e44d-509d-49fd-97f1-b031363bf464", 00:08:14.568 "is_configured": false, 00:08:14.568 "data_offset": 0, 00:08:14.568 "data_size": 65536 00:08:14.568 } 00:08:14.568 ] 00:08:14.568 }' 00:08:14.568 13:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.568 13:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.136 13:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.136 13:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.136 13:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.136 13:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:15.136 13:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.136 13:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:15.136 13:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:15.136 13:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.136 13:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.136 [2024-11-17 13:18:04.143632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:15.137 13:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.137 13:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:15.137 13:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:15.137 13:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:15.137 13:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:15.137 13:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.137 13:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:15.137 13:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.137 13:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.137 13:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.137 13:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.137 13:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.137 13:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:15.137 13:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.137 13:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.137 13:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.137 13:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.137 "name": "Existed_Raid", 00:08:15.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.137 "strip_size_kb": 64, 00:08:15.137 "state": "configuring", 00:08:15.137 "raid_level": "raid0", 00:08:15.137 "superblock": false, 00:08:15.137 
"num_base_bdevs": 3, 00:08:15.137 "num_base_bdevs_discovered": 2, 00:08:15.137 "num_base_bdevs_operational": 3, 00:08:15.137 "base_bdevs_list": [ 00:08:15.137 { 00:08:15.137 "name": "BaseBdev1", 00:08:15.137 "uuid": "7d4429de-d2c7-47df-a8e5-ce23d645158d", 00:08:15.137 "is_configured": true, 00:08:15.137 "data_offset": 0, 00:08:15.137 "data_size": 65536 00:08:15.137 }, 00:08:15.137 { 00:08:15.137 "name": null, 00:08:15.137 "uuid": "83301a9e-f1aa-4752-ad93-88a2da0a65c3", 00:08:15.137 "is_configured": false, 00:08:15.137 "data_offset": 0, 00:08:15.137 "data_size": 65536 00:08:15.137 }, 00:08:15.137 { 00:08:15.137 "name": "BaseBdev3", 00:08:15.137 "uuid": "ef53e44d-509d-49fd-97f1-b031363bf464", 00:08:15.137 "is_configured": true, 00:08:15.137 "data_offset": 0, 00:08:15.137 "data_size": 65536 00:08:15.137 } 00:08:15.137 ] 00:08:15.137 }' 00:08:15.137 13:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.137 13:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.396 13:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.396 13:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:15.396 13:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.396 13:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.396 13:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.396 13:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:15.396 13:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:15.396 13:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.396 13:18:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.397 [2024-11-17 13:18:04.618875] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:15.656 13:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.656 13:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:15.656 13:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:15.656 13:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:15.656 13:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:15.656 13:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.656 13:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:15.656 13:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.656 13:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.656 13:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.656 13:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.656 13:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:15.656 13:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.656 13:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.656 13:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.656 13:18:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.656 13:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.656 "name": "Existed_Raid", 00:08:15.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.656 "strip_size_kb": 64, 00:08:15.656 "state": "configuring", 00:08:15.656 "raid_level": "raid0", 00:08:15.656 "superblock": false, 00:08:15.656 "num_base_bdevs": 3, 00:08:15.656 "num_base_bdevs_discovered": 1, 00:08:15.656 "num_base_bdevs_operational": 3, 00:08:15.656 "base_bdevs_list": [ 00:08:15.656 { 00:08:15.656 "name": null, 00:08:15.656 "uuid": "7d4429de-d2c7-47df-a8e5-ce23d645158d", 00:08:15.656 "is_configured": false, 00:08:15.656 "data_offset": 0, 00:08:15.656 "data_size": 65536 00:08:15.656 }, 00:08:15.656 { 00:08:15.656 "name": null, 00:08:15.656 "uuid": "83301a9e-f1aa-4752-ad93-88a2da0a65c3", 00:08:15.656 "is_configured": false, 00:08:15.656 "data_offset": 0, 00:08:15.656 "data_size": 65536 00:08:15.656 }, 00:08:15.656 { 00:08:15.656 "name": "BaseBdev3", 00:08:15.656 "uuid": "ef53e44d-509d-49fd-97f1-b031363bf464", 00:08:15.656 "is_configured": true, 00:08:15.656 "data_offset": 0, 00:08:15.656 "data_size": 65536 00:08:15.656 } 00:08:15.656 ] 00:08:15.656 }' 00:08:15.656 13:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.656 13:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.225 13:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:16.225 13:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.225 13:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.225 13:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.225 13:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:08:16.225 13:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:16.225 13:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:16.225 13:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.225 13:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.225 [2024-11-17 13:18:05.192653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:16.225 13:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.225 13:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:16.225 13:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:16.225 13:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:16.225 13:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:16.225 13:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:16.225 13:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:16.225 13:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.225 13:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.225 13:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.225 13:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.225 13:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:08:16.225 13:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.225 13:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.225 13:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.225 13:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.225 13:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.225 "name": "Existed_Raid", 00:08:16.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.225 "strip_size_kb": 64, 00:08:16.225 "state": "configuring", 00:08:16.225 "raid_level": "raid0", 00:08:16.225 "superblock": false, 00:08:16.225 "num_base_bdevs": 3, 00:08:16.225 "num_base_bdevs_discovered": 2, 00:08:16.225 "num_base_bdevs_operational": 3, 00:08:16.225 "base_bdevs_list": [ 00:08:16.225 { 00:08:16.225 "name": null, 00:08:16.225 "uuid": "7d4429de-d2c7-47df-a8e5-ce23d645158d", 00:08:16.225 "is_configured": false, 00:08:16.225 "data_offset": 0, 00:08:16.225 "data_size": 65536 00:08:16.225 }, 00:08:16.225 { 00:08:16.225 "name": "BaseBdev2", 00:08:16.225 "uuid": "83301a9e-f1aa-4752-ad93-88a2da0a65c3", 00:08:16.225 "is_configured": true, 00:08:16.225 "data_offset": 0, 00:08:16.225 "data_size": 65536 00:08:16.225 }, 00:08:16.225 { 00:08:16.225 "name": "BaseBdev3", 00:08:16.225 "uuid": "ef53e44d-509d-49fd-97f1-b031363bf464", 00:08:16.225 "is_configured": true, 00:08:16.225 "data_offset": 0, 00:08:16.225 "data_size": 65536 00:08:16.225 } 00:08:16.225 ] 00:08:16.225 }' 00:08:16.225 13:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.225 13:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.484 13:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.484 
13:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.484 13:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.484 13:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:16.484 13:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.484 13:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:16.484 13:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:16.484 13:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.484 13:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.484 13:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.484 13:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.744 13:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7d4429de-d2c7-47df-a8e5-ce23d645158d 00:08:16.744 13:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.744 13:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.744 [2024-11-17 13:18:05.757071] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:16.744 [2024-11-17 13:18:05.757206] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:16.744 [2024-11-17 13:18:05.757255] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:16.745 [2024-11-17 13:18:05.757561] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:08:16.745 [2024-11-17 13:18:05.757765] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:16.745 [2024-11-17 13:18:05.757809] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:16.745 [2024-11-17 13:18:05.758114] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:16.745 NewBaseBdev 00:08:16.745 13:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.745 13:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:16.745 13:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:16.745 13:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:16.745 13:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:16.745 13:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:16.745 13:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:16.745 13:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:16.745 13:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.745 13:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.745 13:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.745 13:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:16.745 13:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.745 13:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:16.745 [ 00:08:16.745 { 00:08:16.745 "name": "NewBaseBdev", 00:08:16.745 "aliases": [ 00:08:16.745 "7d4429de-d2c7-47df-a8e5-ce23d645158d" 00:08:16.745 ], 00:08:16.745 "product_name": "Malloc disk", 00:08:16.745 "block_size": 512, 00:08:16.745 "num_blocks": 65536, 00:08:16.745 "uuid": "7d4429de-d2c7-47df-a8e5-ce23d645158d", 00:08:16.745 "assigned_rate_limits": { 00:08:16.745 "rw_ios_per_sec": 0, 00:08:16.745 "rw_mbytes_per_sec": 0, 00:08:16.745 "r_mbytes_per_sec": 0, 00:08:16.745 "w_mbytes_per_sec": 0 00:08:16.745 }, 00:08:16.745 "claimed": true, 00:08:16.745 "claim_type": "exclusive_write", 00:08:16.745 "zoned": false, 00:08:16.745 "supported_io_types": { 00:08:16.745 "read": true, 00:08:16.745 "write": true, 00:08:16.745 "unmap": true, 00:08:16.745 "flush": true, 00:08:16.745 "reset": true, 00:08:16.745 "nvme_admin": false, 00:08:16.745 "nvme_io": false, 00:08:16.745 "nvme_io_md": false, 00:08:16.745 "write_zeroes": true, 00:08:16.745 "zcopy": true, 00:08:16.745 "get_zone_info": false, 00:08:16.745 "zone_management": false, 00:08:16.745 "zone_append": false, 00:08:16.745 "compare": false, 00:08:16.745 "compare_and_write": false, 00:08:16.745 "abort": true, 00:08:16.745 "seek_hole": false, 00:08:16.745 "seek_data": false, 00:08:16.745 "copy": true, 00:08:16.745 "nvme_iov_md": false 00:08:16.745 }, 00:08:16.745 "memory_domains": [ 00:08:16.745 { 00:08:16.745 "dma_device_id": "system", 00:08:16.745 "dma_device_type": 1 00:08:16.745 }, 00:08:16.745 { 00:08:16.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.745 "dma_device_type": 2 00:08:16.745 } 00:08:16.745 ], 00:08:16.745 "driver_specific": {} 00:08:16.745 } 00:08:16.745 ] 00:08:16.745 13:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.745 13:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:16.745 13:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:08:16.745 13:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:16.745 13:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:16.745 13:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:16.745 13:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:16.745 13:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:16.745 13:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.745 13:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.745 13:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.745 13:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.745 13:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.745 13:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:16.745 13:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.745 13:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.745 13:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.745 13:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.745 "name": "Existed_Raid", 00:08:16.745 "uuid": "936e4d4f-8259-464e-8c0c-fb77a0216ec2", 00:08:16.745 "strip_size_kb": 64, 00:08:16.745 "state": "online", 00:08:16.745 "raid_level": "raid0", 00:08:16.745 "superblock": false, 00:08:16.745 "num_base_bdevs": 3, 00:08:16.745 
"num_base_bdevs_discovered": 3, 00:08:16.745 "num_base_bdevs_operational": 3, 00:08:16.745 "base_bdevs_list": [ 00:08:16.745 { 00:08:16.745 "name": "NewBaseBdev", 00:08:16.745 "uuid": "7d4429de-d2c7-47df-a8e5-ce23d645158d", 00:08:16.745 "is_configured": true, 00:08:16.745 "data_offset": 0, 00:08:16.745 "data_size": 65536 00:08:16.745 }, 00:08:16.745 { 00:08:16.745 "name": "BaseBdev2", 00:08:16.745 "uuid": "83301a9e-f1aa-4752-ad93-88a2da0a65c3", 00:08:16.745 "is_configured": true, 00:08:16.745 "data_offset": 0, 00:08:16.745 "data_size": 65536 00:08:16.745 }, 00:08:16.745 { 00:08:16.745 "name": "BaseBdev3", 00:08:16.745 "uuid": "ef53e44d-509d-49fd-97f1-b031363bf464", 00:08:16.745 "is_configured": true, 00:08:16.745 "data_offset": 0, 00:08:16.745 "data_size": 65536 00:08:16.745 } 00:08:16.745 ] 00:08:16.745 }' 00:08:16.745 13:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.745 13:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.314 13:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:17.314 13:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:17.314 13:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:17.314 13:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:17.315 13:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:17.315 13:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:17.315 13:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:17.315 13:18:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.315 13:18:06 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:08:17.315 13:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:17.315 [2024-11-17 13:18:06.252731] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:17.315 13:18:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.315 13:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:17.315 "name": "Existed_Raid", 00:08:17.315 "aliases": [ 00:08:17.315 "936e4d4f-8259-464e-8c0c-fb77a0216ec2" 00:08:17.315 ], 00:08:17.315 "product_name": "Raid Volume", 00:08:17.315 "block_size": 512, 00:08:17.315 "num_blocks": 196608, 00:08:17.315 "uuid": "936e4d4f-8259-464e-8c0c-fb77a0216ec2", 00:08:17.315 "assigned_rate_limits": { 00:08:17.315 "rw_ios_per_sec": 0, 00:08:17.315 "rw_mbytes_per_sec": 0, 00:08:17.315 "r_mbytes_per_sec": 0, 00:08:17.315 "w_mbytes_per_sec": 0 00:08:17.315 }, 00:08:17.315 "claimed": false, 00:08:17.315 "zoned": false, 00:08:17.315 "supported_io_types": { 00:08:17.315 "read": true, 00:08:17.315 "write": true, 00:08:17.315 "unmap": true, 00:08:17.315 "flush": true, 00:08:17.315 "reset": true, 00:08:17.315 "nvme_admin": false, 00:08:17.315 "nvme_io": false, 00:08:17.315 "nvme_io_md": false, 00:08:17.315 "write_zeroes": true, 00:08:17.315 "zcopy": false, 00:08:17.315 "get_zone_info": false, 00:08:17.315 "zone_management": false, 00:08:17.315 "zone_append": false, 00:08:17.315 "compare": false, 00:08:17.315 "compare_and_write": false, 00:08:17.315 "abort": false, 00:08:17.315 "seek_hole": false, 00:08:17.315 "seek_data": false, 00:08:17.315 "copy": false, 00:08:17.315 "nvme_iov_md": false 00:08:17.315 }, 00:08:17.315 "memory_domains": [ 00:08:17.315 { 00:08:17.315 "dma_device_id": "system", 00:08:17.315 "dma_device_type": 1 00:08:17.315 }, 00:08:17.315 { 00:08:17.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.315 "dma_device_type": 2 00:08:17.315 }, 00:08:17.315 
{ 00:08:17.315 "dma_device_id": "system", 00:08:17.315 "dma_device_type": 1 00:08:17.315 }, 00:08:17.315 { 00:08:17.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.315 "dma_device_type": 2 00:08:17.315 }, 00:08:17.315 { 00:08:17.315 "dma_device_id": "system", 00:08:17.315 "dma_device_type": 1 00:08:17.315 }, 00:08:17.315 { 00:08:17.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.315 "dma_device_type": 2 00:08:17.315 } 00:08:17.315 ], 00:08:17.315 "driver_specific": { 00:08:17.315 "raid": { 00:08:17.315 "uuid": "936e4d4f-8259-464e-8c0c-fb77a0216ec2", 00:08:17.315 "strip_size_kb": 64, 00:08:17.315 "state": "online", 00:08:17.315 "raid_level": "raid0", 00:08:17.315 "superblock": false, 00:08:17.315 "num_base_bdevs": 3, 00:08:17.315 "num_base_bdevs_discovered": 3, 00:08:17.315 "num_base_bdevs_operational": 3, 00:08:17.315 "base_bdevs_list": [ 00:08:17.315 { 00:08:17.315 "name": "NewBaseBdev", 00:08:17.315 "uuid": "7d4429de-d2c7-47df-a8e5-ce23d645158d", 00:08:17.315 "is_configured": true, 00:08:17.315 "data_offset": 0, 00:08:17.315 "data_size": 65536 00:08:17.315 }, 00:08:17.315 { 00:08:17.315 "name": "BaseBdev2", 00:08:17.315 "uuid": "83301a9e-f1aa-4752-ad93-88a2da0a65c3", 00:08:17.315 "is_configured": true, 00:08:17.315 "data_offset": 0, 00:08:17.315 "data_size": 65536 00:08:17.315 }, 00:08:17.315 { 00:08:17.315 "name": "BaseBdev3", 00:08:17.315 "uuid": "ef53e44d-509d-49fd-97f1-b031363bf464", 00:08:17.315 "is_configured": true, 00:08:17.315 "data_offset": 0, 00:08:17.315 "data_size": 65536 00:08:17.315 } 00:08:17.315 ] 00:08:17.315 } 00:08:17.315 } 00:08:17.315 }' 00:08:17.315 13:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:17.315 13:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:17.315 BaseBdev2 00:08:17.315 BaseBdev3' 00:08:17.315 13:18:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:17.315 13:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:17.315 13:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:17.315 13:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:17.315 13:18:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.315 13:18:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.315 13:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:17.315 13:18:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.315 13:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:17.315 13:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:17.315 13:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:17.315 13:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:17.315 13:18:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.315 13:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:17.315 13:18:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.315 13:18:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.315 13:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:17.315 
13:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:17.315 13:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:17.315 13:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:17.315 13:18:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.315 13:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:17.315 13:18:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.315 13:18:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.315 13:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:17.315 13:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:17.315 13:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:17.315 13:18:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.315 13:18:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.315 [2024-11-17 13:18:06.515870] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:17.315 [2024-11-17 13:18:06.515900] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:17.315 [2024-11-17 13:18:06.515986] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:17.315 [2024-11-17 13:18:06.516041] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:17.315 [2024-11-17 13:18:06.516053] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:17.315 13:18:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.315 13:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63768 00:08:17.315 13:18:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 63768 ']' 00:08:17.315 13:18:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 63768 00:08:17.315 13:18:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:17.315 13:18:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:17.315 13:18:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63768 00:08:17.575 13:18:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:17.575 13:18:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:17.575 13:18:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63768' 00:08:17.575 killing process with pid 63768 00:08:17.575 13:18:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 63768 00:08:17.575 [2024-11-17 13:18:06.562070] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:17.575 13:18:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 63768 00:08:17.834 [2024-11-17 13:18:06.859471] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:18.774 13:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:18.774 00:08:18.774 real 0m10.128s 00:08:18.774 user 0m16.082s 00:08:18.774 sys 0m1.661s 00:08:18.774 13:18:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:18.774 
************************************ 00:08:18.774 END TEST raid_state_function_test 00:08:18.774 ************************************ 00:08:18.774 13:18:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.774 13:18:07 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:08:18.774 13:18:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:18.774 13:18:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:18.774 13:18:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:19.035 ************************************ 00:08:19.035 START TEST raid_state_function_test_sb 00:08:19.035 ************************************ 00:08:19.035 13:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:08:19.035 13:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:19.035 13:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:19.035 13:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:19.035 13:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:19.035 13:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:19.035 13:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:19.035 13:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:19.035 13:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:19.035 13:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:19.035 13:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 
00:08:19.035 13:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:19.035 13:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:19.035 13:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:19.035 13:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:19.035 13:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:19.035 13:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:19.035 13:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:19.035 13:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:19.035 13:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:19.035 13:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:19.035 13:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:19.035 13:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:19.035 13:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:19.035 13:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:19.035 13:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:19.035 13:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:19.035 13:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64379 00:08:19.035 13:18:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:19.035 13:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64379' 00:08:19.035 Process raid pid: 64379 00:08:19.035 13:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64379 00:08:19.035 13:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64379 ']' 00:08:19.035 13:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.035 13:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:19.035 13:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:19.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:19.035 13:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:19.035 13:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.035 [2024-11-17 13:18:08.100089] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:08:19.035 [2024-11-17 13:18:08.100279] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:19.296 [2024-11-17 13:18:08.275238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.296 [2024-11-17 13:18:08.388530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.554 [2024-11-17 13:18:08.595892] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:19.554 [2024-11-17 13:18:08.595987] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:19.814 13:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:19.814 13:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:19.814 13:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:19.814 13:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.814 13:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.814 [2024-11-17 13:18:08.939695] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:19.814 [2024-11-17 13:18:08.939833] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:19.814 [2024-11-17 13:18:08.939865] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:19.814 [2024-11-17 13:18:08.939889] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:19.814 [2024-11-17 13:18:08.939907] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:08:19.814 [2024-11-17 13:18:08.939927] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:19.814 13:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.814 13:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:19.814 13:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:19.814 13:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:19.814 13:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:19.814 13:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:19.814 13:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:19.814 13:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.814 13:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.814 13:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.814 13:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.814 13:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.815 13:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:19.815 13:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.815 13:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.815 13:18:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.815 13:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.815 "name": "Existed_Raid", 00:08:19.815 "uuid": "6dac1c38-fc92-4b6f-a7b2-24cb9a235111", 00:08:19.815 "strip_size_kb": 64, 00:08:19.815 "state": "configuring", 00:08:19.815 "raid_level": "raid0", 00:08:19.815 "superblock": true, 00:08:19.815 "num_base_bdevs": 3, 00:08:19.815 "num_base_bdevs_discovered": 0, 00:08:19.815 "num_base_bdevs_operational": 3, 00:08:19.815 "base_bdevs_list": [ 00:08:19.815 { 00:08:19.815 "name": "BaseBdev1", 00:08:19.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.815 "is_configured": false, 00:08:19.815 "data_offset": 0, 00:08:19.815 "data_size": 0 00:08:19.815 }, 00:08:19.815 { 00:08:19.815 "name": "BaseBdev2", 00:08:19.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.815 "is_configured": false, 00:08:19.815 "data_offset": 0, 00:08:19.815 "data_size": 0 00:08:19.815 }, 00:08:19.815 { 00:08:19.815 "name": "BaseBdev3", 00:08:19.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.815 "is_configured": false, 00:08:19.815 "data_offset": 0, 00:08:19.815 "data_size": 0 00:08:19.815 } 00:08:19.815 ] 00:08:19.815 }' 00:08:19.815 13:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.815 13:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.385 13:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:20.385 13:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.385 13:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.385 [2024-11-17 13:18:09.402842] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:20.385 [2024-11-17 13:18:09.402880] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:20.385 13:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.385 13:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:20.385 13:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.385 13:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.385 [2024-11-17 13:18:09.414825] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:20.385 [2024-11-17 13:18:09.414916] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:20.385 [2024-11-17 13:18:09.414945] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:20.385 [2024-11-17 13:18:09.414968] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:20.385 [2024-11-17 13:18:09.414987] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:20.385 [2024-11-17 13:18:09.415009] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:20.385 13:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.385 13:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:20.385 13:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.385 13:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.385 [2024-11-17 13:18:09.462383] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:20.385 BaseBdev1 
00:08:20.385 13:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.385 13:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:20.385 13:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:20.385 13:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:20.385 13:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:20.385 13:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:20.385 13:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:20.385 13:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:20.386 13:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.386 13:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.386 13:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.386 13:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:20.386 13:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.386 13:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.386 [ 00:08:20.386 { 00:08:20.386 "name": "BaseBdev1", 00:08:20.386 "aliases": [ 00:08:20.386 "036f551e-abba-472f-9bdc-b70a71a848db" 00:08:20.386 ], 00:08:20.386 "product_name": "Malloc disk", 00:08:20.386 "block_size": 512, 00:08:20.386 "num_blocks": 65536, 00:08:20.386 "uuid": "036f551e-abba-472f-9bdc-b70a71a848db", 00:08:20.386 "assigned_rate_limits": { 00:08:20.386 
"rw_ios_per_sec": 0, 00:08:20.386 "rw_mbytes_per_sec": 0, 00:08:20.386 "r_mbytes_per_sec": 0, 00:08:20.386 "w_mbytes_per_sec": 0 00:08:20.386 }, 00:08:20.386 "claimed": true, 00:08:20.386 "claim_type": "exclusive_write", 00:08:20.386 "zoned": false, 00:08:20.386 "supported_io_types": { 00:08:20.386 "read": true, 00:08:20.386 "write": true, 00:08:20.386 "unmap": true, 00:08:20.386 "flush": true, 00:08:20.386 "reset": true, 00:08:20.386 "nvme_admin": false, 00:08:20.386 "nvme_io": false, 00:08:20.386 "nvme_io_md": false, 00:08:20.386 "write_zeroes": true, 00:08:20.386 "zcopy": true, 00:08:20.386 "get_zone_info": false, 00:08:20.386 "zone_management": false, 00:08:20.386 "zone_append": false, 00:08:20.386 "compare": false, 00:08:20.386 "compare_and_write": false, 00:08:20.386 "abort": true, 00:08:20.386 "seek_hole": false, 00:08:20.386 "seek_data": false, 00:08:20.386 "copy": true, 00:08:20.386 "nvme_iov_md": false 00:08:20.386 }, 00:08:20.386 "memory_domains": [ 00:08:20.386 { 00:08:20.386 "dma_device_id": "system", 00:08:20.386 "dma_device_type": 1 00:08:20.386 }, 00:08:20.386 { 00:08:20.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.386 "dma_device_type": 2 00:08:20.386 } 00:08:20.386 ], 00:08:20.386 "driver_specific": {} 00:08:20.386 } 00:08:20.386 ] 00:08:20.386 13:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.386 13:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:20.386 13:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:20.386 13:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:20.386 13:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:20.386 13:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:20.386 13:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:20.386 13:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:20.386 13:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.386 13:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.386 13:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:20.386 13:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.386 13:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.386 13:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:20.386 13:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.386 13:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.386 13:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.386 13:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.386 "name": "Existed_Raid", 00:08:20.386 "uuid": "b60b0201-8c0c-469e-995e-ef2e1582e5c7", 00:08:20.386 "strip_size_kb": 64, 00:08:20.386 "state": "configuring", 00:08:20.386 "raid_level": "raid0", 00:08:20.386 "superblock": true, 00:08:20.386 "num_base_bdevs": 3, 00:08:20.386 "num_base_bdevs_discovered": 1, 00:08:20.386 "num_base_bdevs_operational": 3, 00:08:20.386 "base_bdevs_list": [ 00:08:20.386 { 00:08:20.386 "name": "BaseBdev1", 00:08:20.386 "uuid": "036f551e-abba-472f-9bdc-b70a71a848db", 00:08:20.386 "is_configured": true, 00:08:20.386 "data_offset": 2048, 00:08:20.386 "data_size": 63488 
00:08:20.386 }, 00:08:20.386 { 00:08:20.386 "name": "BaseBdev2", 00:08:20.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:20.386 "is_configured": false, 00:08:20.386 "data_offset": 0, 00:08:20.386 "data_size": 0 00:08:20.386 }, 00:08:20.386 { 00:08:20.386 "name": "BaseBdev3", 00:08:20.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:20.386 "is_configured": false, 00:08:20.386 "data_offset": 0, 00:08:20.386 "data_size": 0 00:08:20.386 } 00:08:20.386 ] 00:08:20.386 }' 00:08:20.386 13:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.386 13:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.955 13:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:20.955 13:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.955 13:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.955 [2024-11-17 13:18:09.945614] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:20.955 [2024-11-17 13:18:09.945737] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:20.955 13:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.955 13:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:20.955 13:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.955 13:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.955 [2024-11-17 13:18:09.953657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:20.955 [2024-11-17 
13:18:09.955643] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:20.955 [2024-11-17 13:18:09.955690] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:20.956 [2024-11-17 13:18:09.955701] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:20.956 [2024-11-17 13:18:09.955710] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:20.956 13:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.956 13:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:20.956 13:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:20.956 13:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:20.956 13:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:20.956 13:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:20.956 13:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:20.956 13:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:20.956 13:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:20.956 13:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.956 13:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.956 13:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:20.956 13:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:08:20.956 13:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.956 13:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:20.956 13:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.956 13:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.956 13:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.956 13:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.956 "name": "Existed_Raid", 00:08:20.956 "uuid": "50e2ace4-a619-4120-a34d-7204ec58f53c", 00:08:20.956 "strip_size_kb": 64, 00:08:20.956 "state": "configuring", 00:08:20.956 "raid_level": "raid0", 00:08:20.956 "superblock": true, 00:08:20.956 "num_base_bdevs": 3, 00:08:20.956 "num_base_bdevs_discovered": 1, 00:08:20.956 "num_base_bdevs_operational": 3, 00:08:20.956 "base_bdevs_list": [ 00:08:20.956 { 00:08:20.956 "name": "BaseBdev1", 00:08:20.956 "uuid": "036f551e-abba-472f-9bdc-b70a71a848db", 00:08:20.956 "is_configured": true, 00:08:20.956 "data_offset": 2048, 00:08:20.956 "data_size": 63488 00:08:20.956 }, 00:08:20.956 { 00:08:20.956 "name": "BaseBdev2", 00:08:20.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:20.956 "is_configured": false, 00:08:20.956 "data_offset": 0, 00:08:20.956 "data_size": 0 00:08:20.956 }, 00:08:20.956 { 00:08:20.956 "name": "BaseBdev3", 00:08:20.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:20.956 "is_configured": false, 00:08:20.956 "data_offset": 0, 00:08:20.956 "data_size": 0 00:08:20.956 } 00:08:20.956 ] 00:08:20.956 }' 00:08:20.956 13:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.956 13:18:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:21.216 13:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:21.216 13:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.216 13:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.216 [2024-11-17 13:18:10.434491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:21.216 BaseBdev2 00:08:21.216 13:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.216 13:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:21.216 13:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:21.216 13:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:21.216 13:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:21.216 13:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:21.216 13:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:21.216 13:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:21.216 13:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.216 13:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.476 13:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.476 13:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:21.476 13:18:10 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.476 13:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.476 [ 00:08:21.476 { 00:08:21.476 "name": "BaseBdev2", 00:08:21.476 "aliases": [ 00:08:21.476 "873fe236-4867-4dae-a7d0-bd2f6dcecac6" 00:08:21.476 ], 00:08:21.476 "product_name": "Malloc disk", 00:08:21.476 "block_size": 512, 00:08:21.476 "num_blocks": 65536, 00:08:21.476 "uuid": "873fe236-4867-4dae-a7d0-bd2f6dcecac6", 00:08:21.476 "assigned_rate_limits": { 00:08:21.476 "rw_ios_per_sec": 0, 00:08:21.476 "rw_mbytes_per_sec": 0, 00:08:21.476 "r_mbytes_per_sec": 0, 00:08:21.476 "w_mbytes_per_sec": 0 00:08:21.476 }, 00:08:21.476 "claimed": true, 00:08:21.476 "claim_type": "exclusive_write", 00:08:21.476 "zoned": false, 00:08:21.476 "supported_io_types": { 00:08:21.476 "read": true, 00:08:21.476 "write": true, 00:08:21.476 "unmap": true, 00:08:21.476 "flush": true, 00:08:21.476 "reset": true, 00:08:21.476 "nvme_admin": false, 00:08:21.476 "nvme_io": false, 00:08:21.476 "nvme_io_md": false, 00:08:21.476 "write_zeroes": true, 00:08:21.476 "zcopy": true, 00:08:21.476 "get_zone_info": false, 00:08:21.476 "zone_management": false, 00:08:21.476 "zone_append": false, 00:08:21.476 "compare": false, 00:08:21.476 "compare_and_write": false, 00:08:21.476 "abort": true, 00:08:21.476 "seek_hole": false, 00:08:21.476 "seek_data": false, 00:08:21.476 "copy": true, 00:08:21.476 "nvme_iov_md": false 00:08:21.476 }, 00:08:21.476 "memory_domains": [ 00:08:21.476 { 00:08:21.476 "dma_device_id": "system", 00:08:21.476 "dma_device_type": 1 00:08:21.476 }, 00:08:21.476 { 00:08:21.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.476 "dma_device_type": 2 00:08:21.476 } 00:08:21.476 ], 00:08:21.476 "driver_specific": {} 00:08:21.476 } 00:08:21.476 ] 00:08:21.476 13:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.476 13:18:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:08:21.476 13:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:21.476 13:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:21.476 13:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:21.476 13:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:21.476 13:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:21.476 13:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:21.476 13:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:21.476 13:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:21.476 13:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:21.476 13:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.476 13:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:21.476 13:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:21.476 13:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.476 13:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.476 13:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.476 13:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:21.476 13:18:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.476 13:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:21.476 "name": "Existed_Raid", 00:08:21.476 "uuid": "50e2ace4-a619-4120-a34d-7204ec58f53c", 00:08:21.476 "strip_size_kb": 64, 00:08:21.476 "state": "configuring", 00:08:21.476 "raid_level": "raid0", 00:08:21.476 "superblock": true, 00:08:21.476 "num_base_bdevs": 3, 00:08:21.476 "num_base_bdevs_discovered": 2, 00:08:21.476 "num_base_bdevs_operational": 3, 00:08:21.476 "base_bdevs_list": [ 00:08:21.476 { 00:08:21.476 "name": "BaseBdev1", 00:08:21.476 "uuid": "036f551e-abba-472f-9bdc-b70a71a848db", 00:08:21.476 "is_configured": true, 00:08:21.476 "data_offset": 2048, 00:08:21.476 "data_size": 63488 00:08:21.476 }, 00:08:21.476 { 00:08:21.476 "name": "BaseBdev2", 00:08:21.476 "uuid": "873fe236-4867-4dae-a7d0-bd2f6dcecac6", 00:08:21.476 "is_configured": true, 00:08:21.476 "data_offset": 2048, 00:08:21.476 "data_size": 63488 00:08:21.476 }, 00:08:21.476 { 00:08:21.476 "name": "BaseBdev3", 00:08:21.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.476 "is_configured": false, 00:08:21.476 "data_offset": 0, 00:08:21.476 "data_size": 0 00:08:21.476 } 00:08:21.476 ] 00:08:21.476 }' 00:08:21.476 13:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:21.476 13:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.736 13:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:21.736 13:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.736 13:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.736 [2024-11-17 13:18:10.923698] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:21.736 [2024-11-17 13:18:10.924107] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:21.736 [2024-11-17 13:18:10.924177] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:21.736 [2024-11-17 13:18:10.924538] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:21.736 BaseBdev3 00:08:21.736 [2024-11-17 13:18:10.924765] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:21.736 [2024-11-17 13:18:10.924813] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:21.736 13:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.736 [2024-11-17 13:18:10.925051] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:21.736 13:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:21.736 13:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:21.736 13:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:21.736 13:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:21.736 13:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:21.736 13:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:21.736 13:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:21.736 13:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.736 13:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.736 13:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:08:21.736 13:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:21.736 13:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.736 13:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.736 [ 00:08:21.736 { 00:08:21.736 "name": "BaseBdev3", 00:08:21.736 "aliases": [ 00:08:21.736 "abde36ae-1762-4f3e-8f9d-9aec258e5668" 00:08:21.736 ], 00:08:21.736 "product_name": "Malloc disk", 00:08:21.736 "block_size": 512, 00:08:21.736 "num_blocks": 65536, 00:08:21.736 "uuid": "abde36ae-1762-4f3e-8f9d-9aec258e5668", 00:08:21.736 "assigned_rate_limits": { 00:08:21.736 "rw_ios_per_sec": 0, 00:08:21.736 "rw_mbytes_per_sec": 0, 00:08:21.736 "r_mbytes_per_sec": 0, 00:08:21.736 "w_mbytes_per_sec": 0 00:08:21.736 }, 00:08:21.736 "claimed": true, 00:08:21.736 "claim_type": "exclusive_write", 00:08:21.736 "zoned": false, 00:08:21.736 "supported_io_types": { 00:08:21.736 "read": true, 00:08:21.736 "write": true, 00:08:21.736 "unmap": true, 00:08:21.736 "flush": true, 00:08:21.736 "reset": true, 00:08:21.736 "nvme_admin": false, 00:08:21.736 "nvme_io": false, 00:08:21.736 "nvme_io_md": false, 00:08:21.736 "write_zeroes": true, 00:08:21.736 "zcopy": true, 00:08:21.736 "get_zone_info": false, 00:08:21.736 "zone_management": false, 00:08:21.736 "zone_append": false, 00:08:21.736 "compare": false, 00:08:21.736 "compare_and_write": false, 00:08:21.736 "abort": true, 00:08:21.736 "seek_hole": false, 00:08:21.736 "seek_data": false, 00:08:21.736 "copy": true, 00:08:21.736 "nvme_iov_md": false 00:08:21.736 }, 00:08:21.736 "memory_domains": [ 00:08:21.736 { 00:08:21.736 "dma_device_id": "system", 00:08:21.736 "dma_device_type": 1 00:08:21.736 }, 00:08:21.736 { 00:08:21.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.736 "dma_device_type": 2 00:08:21.736 } 00:08:21.736 ], 00:08:21.736 "driver_specific": 
{} 00:08:21.736 } 00:08:21.736 ] 00:08:21.736 13:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.736 13:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:21.736 13:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:21.736 13:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:21.736 13:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:21.736 13:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:21.736 13:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:21.736 13:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:21.736 13:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:21.736 13:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:21.736 13:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:21.736 13:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.736 13:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:21.736 13:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:21.736 13:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.736 13:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.736 13:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.736 
13:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:21.996 13:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.996 13:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:21.996 "name": "Existed_Raid", 00:08:21.996 "uuid": "50e2ace4-a619-4120-a34d-7204ec58f53c", 00:08:21.996 "strip_size_kb": 64, 00:08:21.996 "state": "online", 00:08:21.996 "raid_level": "raid0", 00:08:21.996 "superblock": true, 00:08:21.996 "num_base_bdevs": 3, 00:08:21.996 "num_base_bdevs_discovered": 3, 00:08:21.996 "num_base_bdevs_operational": 3, 00:08:21.996 "base_bdevs_list": [ 00:08:21.996 { 00:08:21.996 "name": "BaseBdev1", 00:08:21.996 "uuid": "036f551e-abba-472f-9bdc-b70a71a848db", 00:08:21.996 "is_configured": true, 00:08:21.996 "data_offset": 2048, 00:08:21.996 "data_size": 63488 00:08:21.996 }, 00:08:21.996 { 00:08:21.996 "name": "BaseBdev2", 00:08:21.996 "uuid": "873fe236-4867-4dae-a7d0-bd2f6dcecac6", 00:08:21.996 "is_configured": true, 00:08:21.996 "data_offset": 2048, 00:08:21.996 "data_size": 63488 00:08:21.996 }, 00:08:21.996 { 00:08:21.996 "name": "BaseBdev3", 00:08:21.996 "uuid": "abde36ae-1762-4f3e-8f9d-9aec258e5668", 00:08:21.996 "is_configured": true, 00:08:21.996 "data_offset": 2048, 00:08:21.996 "data_size": 63488 00:08:21.996 } 00:08:21.996 ] 00:08:21.996 }' 00:08:21.996 13:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:21.997 13:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.257 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:22.257 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:22.257 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_info 00:08:22.257 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:22.257 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:22.257 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:22.257 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:22.257 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:22.257 13:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.257 13:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.257 [2024-11-17 13:18:11.375317] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:22.257 13:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.257 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:22.257 "name": "Existed_Raid", 00:08:22.257 "aliases": [ 00:08:22.257 "50e2ace4-a619-4120-a34d-7204ec58f53c" 00:08:22.257 ], 00:08:22.257 "product_name": "Raid Volume", 00:08:22.257 "block_size": 512, 00:08:22.257 "num_blocks": 190464, 00:08:22.257 "uuid": "50e2ace4-a619-4120-a34d-7204ec58f53c", 00:08:22.257 "assigned_rate_limits": { 00:08:22.257 "rw_ios_per_sec": 0, 00:08:22.257 "rw_mbytes_per_sec": 0, 00:08:22.257 "r_mbytes_per_sec": 0, 00:08:22.257 "w_mbytes_per_sec": 0 00:08:22.257 }, 00:08:22.257 "claimed": false, 00:08:22.257 "zoned": false, 00:08:22.257 "supported_io_types": { 00:08:22.257 "read": true, 00:08:22.257 "write": true, 00:08:22.257 "unmap": true, 00:08:22.257 "flush": true, 00:08:22.257 "reset": true, 00:08:22.257 "nvme_admin": false, 00:08:22.257 "nvme_io": false, 00:08:22.257 "nvme_io_md": false, 00:08:22.257 
"write_zeroes": true, 00:08:22.257 "zcopy": false, 00:08:22.257 "get_zone_info": false, 00:08:22.257 "zone_management": false, 00:08:22.257 "zone_append": false, 00:08:22.257 "compare": false, 00:08:22.257 "compare_and_write": false, 00:08:22.257 "abort": false, 00:08:22.257 "seek_hole": false, 00:08:22.257 "seek_data": false, 00:08:22.257 "copy": false, 00:08:22.257 "nvme_iov_md": false 00:08:22.257 }, 00:08:22.257 "memory_domains": [ 00:08:22.257 { 00:08:22.257 "dma_device_id": "system", 00:08:22.257 "dma_device_type": 1 00:08:22.257 }, 00:08:22.257 { 00:08:22.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.257 "dma_device_type": 2 00:08:22.257 }, 00:08:22.257 { 00:08:22.257 "dma_device_id": "system", 00:08:22.257 "dma_device_type": 1 00:08:22.257 }, 00:08:22.257 { 00:08:22.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.257 "dma_device_type": 2 00:08:22.257 }, 00:08:22.257 { 00:08:22.257 "dma_device_id": "system", 00:08:22.257 "dma_device_type": 1 00:08:22.257 }, 00:08:22.257 { 00:08:22.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.257 "dma_device_type": 2 00:08:22.257 } 00:08:22.257 ], 00:08:22.257 "driver_specific": { 00:08:22.257 "raid": { 00:08:22.257 "uuid": "50e2ace4-a619-4120-a34d-7204ec58f53c", 00:08:22.257 "strip_size_kb": 64, 00:08:22.257 "state": "online", 00:08:22.257 "raid_level": "raid0", 00:08:22.257 "superblock": true, 00:08:22.257 "num_base_bdevs": 3, 00:08:22.257 "num_base_bdevs_discovered": 3, 00:08:22.257 "num_base_bdevs_operational": 3, 00:08:22.257 "base_bdevs_list": [ 00:08:22.257 { 00:08:22.257 "name": "BaseBdev1", 00:08:22.257 "uuid": "036f551e-abba-472f-9bdc-b70a71a848db", 00:08:22.257 "is_configured": true, 00:08:22.257 "data_offset": 2048, 00:08:22.257 "data_size": 63488 00:08:22.257 }, 00:08:22.257 { 00:08:22.258 "name": "BaseBdev2", 00:08:22.258 "uuid": "873fe236-4867-4dae-a7d0-bd2f6dcecac6", 00:08:22.258 "is_configured": true, 00:08:22.258 "data_offset": 2048, 00:08:22.258 "data_size": 63488 00:08:22.258 }, 
00:08:22.258 { 00:08:22.258 "name": "BaseBdev3", 00:08:22.258 "uuid": "abde36ae-1762-4f3e-8f9d-9aec258e5668", 00:08:22.258 "is_configured": true, 00:08:22.258 "data_offset": 2048, 00:08:22.258 "data_size": 63488 00:08:22.258 } 00:08:22.258 ] 00:08:22.258 } 00:08:22.258 } 00:08:22.258 }' 00:08:22.258 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:22.258 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:22.258 BaseBdev2 00:08:22.258 BaseBdev3' 00:08:22.258 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:22.519 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:22.519 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:22.519 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:22.519 13:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.519 13:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.519 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:22.519 13:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.519 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:22.519 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:22.519 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:22.519 
13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:22.519 13:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.519 13:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.519 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:22.519 13:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.519 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:22.519 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:22.519 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:22.519 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:22.519 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:22.519 13:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.519 13:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.519 13:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.519 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:22.519 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:22.519 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:22.519 13:18:11 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.519 13:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.519 [2024-11-17 13:18:11.622623] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:22.519 [2024-11-17 13:18:11.622652] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:22.519 [2024-11-17 13:18:11.622704] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:22.519 13:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.519 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:22.519 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:22.519 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:22.519 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:22.519 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:22.519 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:22.519 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:22.519 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:22.519 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:22.519 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:22.519 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:22.519 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:22.519 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.519 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.519 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.519 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.519 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:22.519 13:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.519 13:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.779 13:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.779 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.779 "name": "Existed_Raid", 00:08:22.779 "uuid": "50e2ace4-a619-4120-a34d-7204ec58f53c", 00:08:22.779 "strip_size_kb": 64, 00:08:22.779 "state": "offline", 00:08:22.779 "raid_level": "raid0", 00:08:22.779 "superblock": true, 00:08:22.779 "num_base_bdevs": 3, 00:08:22.779 "num_base_bdevs_discovered": 2, 00:08:22.779 "num_base_bdevs_operational": 2, 00:08:22.779 "base_bdevs_list": [ 00:08:22.779 { 00:08:22.779 "name": null, 00:08:22.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.779 "is_configured": false, 00:08:22.779 "data_offset": 0, 00:08:22.779 "data_size": 63488 00:08:22.779 }, 00:08:22.779 { 00:08:22.779 "name": "BaseBdev2", 00:08:22.779 "uuid": "873fe236-4867-4dae-a7d0-bd2f6dcecac6", 00:08:22.779 "is_configured": true, 00:08:22.779 "data_offset": 2048, 00:08:22.779 "data_size": 63488 00:08:22.779 }, 00:08:22.779 { 00:08:22.779 "name": "BaseBdev3", 00:08:22.779 "uuid": "abde36ae-1762-4f3e-8f9d-9aec258e5668", 
00:08:22.779 "is_configured": true, 00:08:22.779 "data_offset": 2048, 00:08:22.779 "data_size": 63488 00:08:22.779 } 00:08:22.779 ] 00:08:22.779 }' 00:08:22.779 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.779 13:18:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.039 13:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:23.039 13:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:23.039 13:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.039 13:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.039 13:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.039 13:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:23.039 13:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.039 13:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:23.039 13:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:23.039 13:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:23.039 13:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.039 13:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.039 [2024-11-17 13:18:12.218267] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:23.299 13:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.299 13:18:12 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:23.299 13:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:23.299 13:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.299 13:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.299 13:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:23.299 13:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.299 13:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.299 13:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:23.299 13:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:23.299 13:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:23.299 13:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.299 13:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.299 [2024-11-17 13:18:12.373386] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:23.300 [2024-11-17 13:18:12.373436] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:23.300 13:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.300 13:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:23.300 13:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:23.300 13:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] 
| select(.)' 00:08:23.300 13:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.300 13:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.300 13:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.300 13:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.300 13:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:23.300 13:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:23.300 13:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:23.300 13:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:23.300 13:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:23.300 13:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:23.300 13:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.300 13:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.559 BaseBdev2 00:08:23.559 13:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.559 13:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:23.559 13:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:23.559 13:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:23.559 13:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:23.559 13:18:12 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:23.560 13:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:23.560 13:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:23.560 13:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.560 13:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.560 13:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.560 13:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:23.560 13:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.560 13:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.560 [ 00:08:23.560 { 00:08:23.560 "name": "BaseBdev2", 00:08:23.560 "aliases": [ 00:08:23.560 "506433d8-f5b7-49d9-b772-a88c582c515d" 00:08:23.560 ], 00:08:23.560 "product_name": "Malloc disk", 00:08:23.560 "block_size": 512, 00:08:23.560 "num_blocks": 65536, 00:08:23.560 "uuid": "506433d8-f5b7-49d9-b772-a88c582c515d", 00:08:23.560 "assigned_rate_limits": { 00:08:23.560 "rw_ios_per_sec": 0, 00:08:23.560 "rw_mbytes_per_sec": 0, 00:08:23.560 "r_mbytes_per_sec": 0, 00:08:23.560 "w_mbytes_per_sec": 0 00:08:23.560 }, 00:08:23.560 "claimed": false, 00:08:23.560 "zoned": false, 00:08:23.560 "supported_io_types": { 00:08:23.560 "read": true, 00:08:23.560 "write": true, 00:08:23.560 "unmap": true, 00:08:23.560 "flush": true, 00:08:23.560 "reset": true, 00:08:23.560 "nvme_admin": false, 00:08:23.560 "nvme_io": false, 00:08:23.560 "nvme_io_md": false, 00:08:23.560 "write_zeroes": true, 00:08:23.560 "zcopy": true, 00:08:23.560 "get_zone_info": false, 00:08:23.560 "zone_management": false, 00:08:23.560 
"zone_append": false, 00:08:23.560 "compare": false, 00:08:23.560 "compare_and_write": false, 00:08:23.560 "abort": true, 00:08:23.560 "seek_hole": false, 00:08:23.560 "seek_data": false, 00:08:23.560 "copy": true, 00:08:23.560 "nvme_iov_md": false 00:08:23.560 }, 00:08:23.560 "memory_domains": [ 00:08:23.560 { 00:08:23.560 "dma_device_id": "system", 00:08:23.560 "dma_device_type": 1 00:08:23.560 }, 00:08:23.560 { 00:08:23.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.560 "dma_device_type": 2 00:08:23.560 } 00:08:23.560 ], 00:08:23.560 "driver_specific": {} 00:08:23.560 } 00:08:23.560 ] 00:08:23.560 13:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.560 13:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:23.560 13:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:23.560 13:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:23.560 13:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:23.560 13:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.560 13:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.560 BaseBdev3 00:08:23.560 13:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.560 13:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:23.560 13:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:23.560 13:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:23.560 13:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:23.560 
13:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:23.560 13:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:23.560 13:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:23.560 13:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.560 13:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.560 13:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.560 13:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:23.560 13:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.560 13:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.560 [ 00:08:23.560 { 00:08:23.560 "name": "BaseBdev3", 00:08:23.560 "aliases": [ 00:08:23.560 "47342608-d423-415b-9bcd-2e3b5d7d65e2" 00:08:23.560 ], 00:08:23.560 "product_name": "Malloc disk", 00:08:23.560 "block_size": 512, 00:08:23.560 "num_blocks": 65536, 00:08:23.560 "uuid": "47342608-d423-415b-9bcd-2e3b5d7d65e2", 00:08:23.560 "assigned_rate_limits": { 00:08:23.560 "rw_ios_per_sec": 0, 00:08:23.560 "rw_mbytes_per_sec": 0, 00:08:23.560 "r_mbytes_per_sec": 0, 00:08:23.560 "w_mbytes_per_sec": 0 00:08:23.560 }, 00:08:23.560 "claimed": false, 00:08:23.560 "zoned": false, 00:08:23.560 "supported_io_types": { 00:08:23.560 "read": true, 00:08:23.560 "write": true, 00:08:23.560 "unmap": true, 00:08:23.560 "flush": true, 00:08:23.560 "reset": true, 00:08:23.560 "nvme_admin": false, 00:08:23.560 "nvme_io": false, 00:08:23.560 "nvme_io_md": false, 00:08:23.560 "write_zeroes": true, 00:08:23.560 "zcopy": true, 00:08:23.560 "get_zone_info": false, 
00:08:23.560 "zone_management": false, 00:08:23.560 "zone_append": false, 00:08:23.560 "compare": false, 00:08:23.560 "compare_and_write": false, 00:08:23.560 "abort": true, 00:08:23.560 "seek_hole": false, 00:08:23.560 "seek_data": false, 00:08:23.560 "copy": true, 00:08:23.560 "nvme_iov_md": false 00:08:23.560 }, 00:08:23.560 "memory_domains": [ 00:08:23.560 { 00:08:23.560 "dma_device_id": "system", 00:08:23.560 "dma_device_type": 1 00:08:23.560 }, 00:08:23.560 { 00:08:23.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.560 "dma_device_type": 2 00:08:23.560 } 00:08:23.560 ], 00:08:23.560 "driver_specific": {} 00:08:23.560 } 00:08:23.560 ] 00:08:23.560 13:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.560 13:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:23.560 13:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:23.560 13:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:23.560 13:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:23.560 13:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.560 13:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.560 [2024-11-17 13:18:12.680426] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:23.560 [2024-11-17 13:18:12.680551] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:23.560 [2024-11-17 13:18:12.680636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:23.560 [2024-11-17 13:18:12.682812] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 
is claimed 00:08:23.560 13:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.560 13:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:23.560 13:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:23.560 13:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:23.561 13:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:23.561 13:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:23.561 13:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:23.561 13:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.561 13:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.561 13:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.561 13:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.561 13:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.561 13:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:23.561 13:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.561 13:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.561 13:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.561 13:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:08:23.561 "name": "Existed_Raid", 00:08:23.561 "uuid": "94e3c797-9cf2-4ca8-bdf6-d0835ad2bbbe", 00:08:23.561 "strip_size_kb": 64, 00:08:23.561 "state": "configuring", 00:08:23.561 "raid_level": "raid0", 00:08:23.561 "superblock": true, 00:08:23.561 "num_base_bdevs": 3, 00:08:23.561 "num_base_bdevs_discovered": 2, 00:08:23.561 "num_base_bdevs_operational": 3, 00:08:23.561 "base_bdevs_list": [ 00:08:23.561 { 00:08:23.561 "name": "BaseBdev1", 00:08:23.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.561 "is_configured": false, 00:08:23.561 "data_offset": 0, 00:08:23.561 "data_size": 0 00:08:23.561 }, 00:08:23.561 { 00:08:23.561 "name": "BaseBdev2", 00:08:23.561 "uuid": "506433d8-f5b7-49d9-b772-a88c582c515d", 00:08:23.561 "is_configured": true, 00:08:23.561 "data_offset": 2048, 00:08:23.561 "data_size": 63488 00:08:23.561 }, 00:08:23.561 { 00:08:23.561 "name": "BaseBdev3", 00:08:23.561 "uuid": "47342608-d423-415b-9bcd-2e3b5d7d65e2", 00:08:23.561 "is_configured": true, 00:08:23.561 "data_offset": 2048, 00:08:23.561 "data_size": 63488 00:08:23.561 } 00:08:23.561 ] 00:08:23.561 }' 00:08:23.561 13:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.561 13:18:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.130 13:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:24.130 13:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.130 13:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.130 [2024-11-17 13:18:13.155608] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:24.130 13:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.130 13:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:24.130 13:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:24.130 13:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:24.130 13:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:24.130 13:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:24.130 13:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:24.130 13:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.130 13:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.130 13:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.130 13:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.130 13:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:24.130 13:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.131 13:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.131 13:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.131 13:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.131 13:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.131 "name": "Existed_Raid", 00:08:24.131 "uuid": "94e3c797-9cf2-4ca8-bdf6-d0835ad2bbbe", 00:08:24.131 "strip_size_kb": 64, 00:08:24.131 "state": "configuring", 00:08:24.131 "raid_level": "raid0", 
00:08:24.131 "superblock": true, 00:08:24.131 "num_base_bdevs": 3, 00:08:24.131 "num_base_bdevs_discovered": 1, 00:08:24.131 "num_base_bdevs_operational": 3, 00:08:24.131 "base_bdevs_list": [ 00:08:24.131 { 00:08:24.131 "name": "BaseBdev1", 00:08:24.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.131 "is_configured": false, 00:08:24.131 "data_offset": 0, 00:08:24.131 "data_size": 0 00:08:24.131 }, 00:08:24.131 { 00:08:24.131 "name": null, 00:08:24.131 "uuid": "506433d8-f5b7-49d9-b772-a88c582c515d", 00:08:24.131 "is_configured": false, 00:08:24.131 "data_offset": 0, 00:08:24.131 "data_size": 63488 00:08:24.131 }, 00:08:24.131 { 00:08:24.131 "name": "BaseBdev3", 00:08:24.131 "uuid": "47342608-d423-415b-9bcd-2e3b5d7d65e2", 00:08:24.131 "is_configured": true, 00:08:24.131 "data_offset": 2048, 00:08:24.131 "data_size": 63488 00:08:24.131 } 00:08:24.131 ] 00:08:24.131 }' 00:08:24.131 13:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.131 13:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.700 13:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.700 13:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.700 13:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.700 13:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:24.700 13:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.701 13:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:24.701 13:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:24.701 13:18:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.701 13:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.701 [2024-11-17 13:18:13.714355] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:24.701 BaseBdev1 00:08:24.701 13:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.701 13:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:24.701 13:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:24.701 13:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:24.701 13:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:24.701 13:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:24.701 13:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:24.701 13:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:24.701 13:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.701 13:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.701 13:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.701 13:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:24.701 13:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.701 13:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.701 [ 00:08:24.701 { 00:08:24.701 "name": "BaseBdev1", 00:08:24.701 
"aliases": [ 00:08:24.701 "a560a669-901a-4888-bc57-4c1bb726ba6f" 00:08:24.701 ], 00:08:24.701 "product_name": "Malloc disk", 00:08:24.701 "block_size": 512, 00:08:24.701 "num_blocks": 65536, 00:08:24.701 "uuid": "a560a669-901a-4888-bc57-4c1bb726ba6f", 00:08:24.701 "assigned_rate_limits": { 00:08:24.701 "rw_ios_per_sec": 0, 00:08:24.701 "rw_mbytes_per_sec": 0, 00:08:24.701 "r_mbytes_per_sec": 0, 00:08:24.701 "w_mbytes_per_sec": 0 00:08:24.701 }, 00:08:24.701 "claimed": true, 00:08:24.701 "claim_type": "exclusive_write", 00:08:24.701 "zoned": false, 00:08:24.701 "supported_io_types": { 00:08:24.701 "read": true, 00:08:24.701 "write": true, 00:08:24.701 "unmap": true, 00:08:24.701 "flush": true, 00:08:24.701 "reset": true, 00:08:24.701 "nvme_admin": false, 00:08:24.701 "nvme_io": false, 00:08:24.701 "nvme_io_md": false, 00:08:24.701 "write_zeroes": true, 00:08:24.701 "zcopy": true, 00:08:24.701 "get_zone_info": false, 00:08:24.701 "zone_management": false, 00:08:24.701 "zone_append": false, 00:08:24.701 "compare": false, 00:08:24.701 "compare_and_write": false, 00:08:24.701 "abort": true, 00:08:24.701 "seek_hole": false, 00:08:24.701 "seek_data": false, 00:08:24.701 "copy": true, 00:08:24.701 "nvme_iov_md": false 00:08:24.701 }, 00:08:24.701 "memory_domains": [ 00:08:24.701 { 00:08:24.701 "dma_device_id": "system", 00:08:24.701 "dma_device_type": 1 00:08:24.701 }, 00:08:24.701 { 00:08:24.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.701 "dma_device_type": 2 00:08:24.701 } 00:08:24.701 ], 00:08:24.701 "driver_specific": {} 00:08:24.701 } 00:08:24.701 ] 00:08:24.701 13:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.701 13:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:24.701 13:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:24.701 13:18:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:24.701 13:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:24.701 13:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:24.701 13:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:24.701 13:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:24.701 13:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.701 13:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.701 13:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.701 13:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.701 13:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:24.701 13:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.701 13:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.701 13:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.701 13:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.701 13:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.701 "name": "Existed_Raid", 00:08:24.701 "uuid": "94e3c797-9cf2-4ca8-bdf6-d0835ad2bbbe", 00:08:24.701 "strip_size_kb": 64, 00:08:24.701 "state": "configuring", 00:08:24.701 "raid_level": "raid0", 00:08:24.701 "superblock": true, 00:08:24.701 "num_base_bdevs": 3, 00:08:24.701 
"num_base_bdevs_discovered": 2, 00:08:24.701 "num_base_bdevs_operational": 3, 00:08:24.701 "base_bdevs_list": [ 00:08:24.701 { 00:08:24.701 "name": "BaseBdev1", 00:08:24.701 "uuid": "a560a669-901a-4888-bc57-4c1bb726ba6f", 00:08:24.701 "is_configured": true, 00:08:24.701 "data_offset": 2048, 00:08:24.701 "data_size": 63488 00:08:24.701 }, 00:08:24.701 { 00:08:24.701 "name": null, 00:08:24.701 "uuid": "506433d8-f5b7-49d9-b772-a88c582c515d", 00:08:24.701 "is_configured": false, 00:08:24.701 "data_offset": 0, 00:08:24.701 "data_size": 63488 00:08:24.701 }, 00:08:24.701 { 00:08:24.701 "name": "BaseBdev3", 00:08:24.701 "uuid": "47342608-d423-415b-9bcd-2e3b5d7d65e2", 00:08:24.701 "is_configured": true, 00:08:24.701 "data_offset": 2048, 00:08:24.701 "data_size": 63488 00:08:24.701 } 00:08:24.701 ] 00:08:24.701 }' 00:08:24.701 13:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.701 13:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.961 13:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.961 13:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.961 13:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.961 13:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:25.221 13:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.221 13:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:25.221 13:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:25.221 13:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.221 13:18:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.221 [2024-11-17 13:18:14.205579] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:25.221 13:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.221 13:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:25.221 13:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:25.221 13:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:25.221 13:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:25.221 13:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:25.221 13:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:25.221 13:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.221 13:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.221 13:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.221 13:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.221 13:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.221 13:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:25.221 13:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.221 13:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.221 13:18:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.221 13:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.221 "name": "Existed_Raid", 00:08:25.221 "uuid": "94e3c797-9cf2-4ca8-bdf6-d0835ad2bbbe", 00:08:25.221 "strip_size_kb": 64, 00:08:25.221 "state": "configuring", 00:08:25.221 "raid_level": "raid0", 00:08:25.221 "superblock": true, 00:08:25.221 "num_base_bdevs": 3, 00:08:25.221 "num_base_bdevs_discovered": 1, 00:08:25.221 "num_base_bdevs_operational": 3, 00:08:25.221 "base_bdevs_list": [ 00:08:25.221 { 00:08:25.221 "name": "BaseBdev1", 00:08:25.221 "uuid": "a560a669-901a-4888-bc57-4c1bb726ba6f", 00:08:25.221 "is_configured": true, 00:08:25.221 "data_offset": 2048, 00:08:25.221 "data_size": 63488 00:08:25.221 }, 00:08:25.221 { 00:08:25.221 "name": null, 00:08:25.221 "uuid": "506433d8-f5b7-49d9-b772-a88c582c515d", 00:08:25.221 "is_configured": false, 00:08:25.221 "data_offset": 0, 00:08:25.221 "data_size": 63488 00:08:25.221 }, 00:08:25.221 { 00:08:25.221 "name": null, 00:08:25.221 "uuid": "47342608-d423-415b-9bcd-2e3b5d7d65e2", 00:08:25.221 "is_configured": false, 00:08:25.221 "data_offset": 0, 00:08:25.221 "data_size": 63488 00:08:25.221 } 00:08:25.221 ] 00:08:25.221 }' 00:08:25.221 13:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.221 13:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.481 13:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:25.481 13:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.481 13:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.481 13:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.481 13:18:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.481 13:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:25.481 13:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:25.481 13:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.481 13:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.481 [2024-11-17 13:18:14.684809] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:25.481 13:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.482 13:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:25.482 13:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:25.482 13:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:25.482 13:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:25.482 13:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:25.482 13:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:25.482 13:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.482 13:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.482 13:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.482 13:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:08:25.482 13:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.482 13:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:25.482 13:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.482 13:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.741 13:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.741 13:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.741 "name": "Existed_Raid", 00:08:25.741 "uuid": "94e3c797-9cf2-4ca8-bdf6-d0835ad2bbbe", 00:08:25.741 "strip_size_kb": 64, 00:08:25.741 "state": "configuring", 00:08:25.741 "raid_level": "raid0", 00:08:25.741 "superblock": true, 00:08:25.741 "num_base_bdevs": 3, 00:08:25.741 "num_base_bdevs_discovered": 2, 00:08:25.741 "num_base_bdevs_operational": 3, 00:08:25.741 "base_bdevs_list": [ 00:08:25.741 { 00:08:25.741 "name": "BaseBdev1", 00:08:25.741 "uuid": "a560a669-901a-4888-bc57-4c1bb726ba6f", 00:08:25.741 "is_configured": true, 00:08:25.741 "data_offset": 2048, 00:08:25.741 "data_size": 63488 00:08:25.741 }, 00:08:25.741 { 00:08:25.741 "name": null, 00:08:25.741 "uuid": "506433d8-f5b7-49d9-b772-a88c582c515d", 00:08:25.741 "is_configured": false, 00:08:25.741 "data_offset": 0, 00:08:25.741 "data_size": 63488 00:08:25.741 }, 00:08:25.741 { 00:08:25.741 "name": "BaseBdev3", 00:08:25.741 "uuid": "47342608-d423-415b-9bcd-2e3b5d7d65e2", 00:08:25.741 "is_configured": true, 00:08:25.741 "data_offset": 2048, 00:08:25.741 "data_size": 63488 00:08:25.741 } 00:08:25.741 ] 00:08:25.741 }' 00:08:25.741 13:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.741 13:18:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:08:26.000 13:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.000 13:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:26.000 13:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.000 13:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.000 13:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.000 13:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:26.000 13:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:26.000 13:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.000 13:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.000 [2024-11-17 13:18:15.148679] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:26.259 13:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.259 13:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:26.259 13:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:26.259 13:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:26.259 13:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:26.259 13:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:26.259 13:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:08:26.259 13:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.259 13:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.259 13:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.259 13:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.259 13:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.259 13:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:26.259 13:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.259 13:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.259 13:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.259 13:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.259 "name": "Existed_Raid", 00:08:26.259 "uuid": "94e3c797-9cf2-4ca8-bdf6-d0835ad2bbbe", 00:08:26.259 "strip_size_kb": 64, 00:08:26.259 "state": "configuring", 00:08:26.259 "raid_level": "raid0", 00:08:26.259 "superblock": true, 00:08:26.259 "num_base_bdevs": 3, 00:08:26.259 "num_base_bdevs_discovered": 1, 00:08:26.259 "num_base_bdevs_operational": 3, 00:08:26.259 "base_bdevs_list": [ 00:08:26.259 { 00:08:26.259 "name": null, 00:08:26.259 "uuid": "a560a669-901a-4888-bc57-4c1bb726ba6f", 00:08:26.260 "is_configured": false, 00:08:26.260 "data_offset": 0, 00:08:26.260 "data_size": 63488 00:08:26.260 }, 00:08:26.260 { 00:08:26.260 "name": null, 00:08:26.260 "uuid": "506433d8-f5b7-49d9-b772-a88c582c515d", 00:08:26.260 "is_configured": false, 00:08:26.260 "data_offset": 0, 00:08:26.260 "data_size": 63488 00:08:26.260 
}, 00:08:26.260 { 00:08:26.260 "name": "BaseBdev3", 00:08:26.260 "uuid": "47342608-d423-415b-9bcd-2e3b5d7d65e2", 00:08:26.260 "is_configured": true, 00:08:26.260 "data_offset": 2048, 00:08:26.260 "data_size": 63488 00:08:26.260 } 00:08:26.260 ] 00:08:26.260 }' 00:08:26.260 13:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.260 13:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.519 13:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:26.519 13:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.519 13:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.519 13:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.519 13:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.519 13:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:26.519 13:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:26.519 13:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.519 13:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.519 [2024-11-17 13:18:15.694063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:26.519 13:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.519 13:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:26.519 13:18:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:26.519 13:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:26.519 13:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:26.519 13:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:26.519 13:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:26.519 13:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.519 13:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.519 13:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.519 13:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.519 13:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.519 13:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:26.519 13:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.519 13:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.519 13:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.790 13:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.790 "name": "Existed_Raid", 00:08:26.790 "uuid": "94e3c797-9cf2-4ca8-bdf6-d0835ad2bbbe", 00:08:26.790 "strip_size_kb": 64, 00:08:26.790 "state": "configuring", 00:08:26.790 "raid_level": "raid0", 00:08:26.790 "superblock": true, 00:08:26.790 "num_base_bdevs": 3, 00:08:26.790 "num_base_bdevs_discovered": 2, 00:08:26.790 
"num_base_bdevs_operational": 3, 00:08:26.790 "base_bdevs_list": [ 00:08:26.790 { 00:08:26.790 "name": null, 00:08:26.790 "uuid": "a560a669-901a-4888-bc57-4c1bb726ba6f", 00:08:26.790 "is_configured": false, 00:08:26.790 "data_offset": 0, 00:08:26.790 "data_size": 63488 00:08:26.790 }, 00:08:26.790 { 00:08:26.790 "name": "BaseBdev2", 00:08:26.790 "uuid": "506433d8-f5b7-49d9-b772-a88c582c515d", 00:08:26.790 "is_configured": true, 00:08:26.790 "data_offset": 2048, 00:08:26.790 "data_size": 63488 00:08:26.790 }, 00:08:26.790 { 00:08:26.790 "name": "BaseBdev3", 00:08:26.790 "uuid": "47342608-d423-415b-9bcd-2e3b5d7d65e2", 00:08:26.790 "is_configured": true, 00:08:26.790 "data_offset": 2048, 00:08:26.790 "data_size": 63488 00:08:26.790 } 00:08:26.790 ] 00:08:26.790 }' 00:08:26.790 13:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.790 13:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.065 13:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.065 13:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.065 13:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.065 13:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:27.065 13:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.065 13:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:27.065 13:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.065 13:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.065 13:18:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:27.065 13:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:27.065 13:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.065 13:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a560a669-901a-4888-bc57-4c1bb726ba6f 00:08:27.065 13:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.065 13:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.065 [2024-11-17 13:18:16.250663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:27.065 [2024-11-17 13:18:16.251056] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:27.065 [2024-11-17 13:18:16.251121] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:27.065 [2024-11-17 13:18:16.251480] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:27.065 NewBaseBdev 00:08:27.065 [2024-11-17 13:18:16.251705] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:27.065 [2024-11-17 13:18:16.251718] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:27.065 [2024-11-17 13:18:16.251872] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:27.065 13:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.065 13:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:27.065 13:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:27.065 13:18:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:27.065 13:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:27.065 13:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:27.065 13:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:27.065 13:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:27.065 13:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.065 13:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.065 13:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.065 13:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:27.065 13:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.065 13:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.065 [ 00:08:27.065 { 00:08:27.065 "name": "NewBaseBdev", 00:08:27.065 "aliases": [ 00:08:27.065 "a560a669-901a-4888-bc57-4c1bb726ba6f" 00:08:27.065 ], 00:08:27.065 "product_name": "Malloc disk", 00:08:27.065 "block_size": 512, 00:08:27.065 "num_blocks": 65536, 00:08:27.065 "uuid": "a560a669-901a-4888-bc57-4c1bb726ba6f", 00:08:27.065 "assigned_rate_limits": { 00:08:27.065 "rw_ios_per_sec": 0, 00:08:27.065 "rw_mbytes_per_sec": 0, 00:08:27.065 "r_mbytes_per_sec": 0, 00:08:27.065 "w_mbytes_per_sec": 0 00:08:27.065 }, 00:08:27.065 "claimed": true, 00:08:27.065 "claim_type": "exclusive_write", 00:08:27.065 "zoned": false, 00:08:27.065 "supported_io_types": { 00:08:27.066 "read": true, 00:08:27.066 "write": true, 00:08:27.066 "unmap": true, 
00:08:27.066 "flush": true, 00:08:27.066 "reset": true, 00:08:27.066 "nvme_admin": false, 00:08:27.066 "nvme_io": false, 00:08:27.066 "nvme_io_md": false, 00:08:27.066 "write_zeroes": true, 00:08:27.066 "zcopy": true, 00:08:27.066 "get_zone_info": false, 00:08:27.066 "zone_management": false, 00:08:27.066 "zone_append": false, 00:08:27.066 "compare": false, 00:08:27.066 "compare_and_write": false, 00:08:27.066 "abort": true, 00:08:27.066 "seek_hole": false, 00:08:27.066 "seek_data": false, 00:08:27.066 "copy": true, 00:08:27.066 "nvme_iov_md": false 00:08:27.066 }, 00:08:27.066 "memory_domains": [ 00:08:27.066 { 00:08:27.066 "dma_device_id": "system", 00:08:27.066 "dma_device_type": 1 00:08:27.066 }, 00:08:27.066 { 00:08:27.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:27.066 "dma_device_type": 2 00:08:27.066 } 00:08:27.066 ], 00:08:27.066 "driver_specific": {} 00:08:27.066 } 00:08:27.066 ] 00:08:27.066 13:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.066 13:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:27.325 13:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:27.325 13:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:27.325 13:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:27.325 13:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:27.325 13:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:27.325 13:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:27.325 13:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.325 13:18:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.325 13:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.325 13:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.326 13:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.326 13:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.326 13:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.326 13:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:27.326 13:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.326 13:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.326 "name": "Existed_Raid", 00:08:27.326 "uuid": "94e3c797-9cf2-4ca8-bdf6-d0835ad2bbbe", 00:08:27.326 "strip_size_kb": 64, 00:08:27.326 "state": "online", 00:08:27.326 "raid_level": "raid0", 00:08:27.326 "superblock": true, 00:08:27.326 "num_base_bdevs": 3, 00:08:27.326 "num_base_bdevs_discovered": 3, 00:08:27.326 "num_base_bdevs_operational": 3, 00:08:27.326 "base_bdevs_list": [ 00:08:27.326 { 00:08:27.326 "name": "NewBaseBdev", 00:08:27.326 "uuid": "a560a669-901a-4888-bc57-4c1bb726ba6f", 00:08:27.326 "is_configured": true, 00:08:27.326 "data_offset": 2048, 00:08:27.326 "data_size": 63488 00:08:27.326 }, 00:08:27.326 { 00:08:27.326 "name": "BaseBdev2", 00:08:27.326 "uuid": "506433d8-f5b7-49d9-b772-a88c582c515d", 00:08:27.326 "is_configured": true, 00:08:27.326 "data_offset": 2048, 00:08:27.326 "data_size": 63488 00:08:27.326 }, 00:08:27.326 { 00:08:27.326 "name": "BaseBdev3", 00:08:27.326 "uuid": "47342608-d423-415b-9bcd-2e3b5d7d65e2", 00:08:27.326 "is_configured": 
true, 00:08:27.326 "data_offset": 2048, 00:08:27.326 "data_size": 63488 00:08:27.326 } 00:08:27.326 ] 00:08:27.326 }' 00:08:27.326 13:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.326 13:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.586 13:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:27.586 13:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:27.586 13:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:27.586 13:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:27.586 13:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:27.586 13:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:27.586 13:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:27.586 13:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.586 13:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.586 13:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:27.586 [2024-11-17 13:18:16.726328] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:27.586 13:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.586 13:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:27.586 "name": "Existed_Raid", 00:08:27.586 "aliases": [ 00:08:27.586 "94e3c797-9cf2-4ca8-bdf6-d0835ad2bbbe" 00:08:27.586 ], 00:08:27.586 "product_name": "Raid Volume", 
00:08:27.586 "block_size": 512, 00:08:27.586 "num_blocks": 190464, 00:08:27.586 "uuid": "94e3c797-9cf2-4ca8-bdf6-d0835ad2bbbe", 00:08:27.586 "assigned_rate_limits": { 00:08:27.586 "rw_ios_per_sec": 0, 00:08:27.586 "rw_mbytes_per_sec": 0, 00:08:27.586 "r_mbytes_per_sec": 0, 00:08:27.586 "w_mbytes_per_sec": 0 00:08:27.586 }, 00:08:27.586 "claimed": false, 00:08:27.586 "zoned": false, 00:08:27.586 "supported_io_types": { 00:08:27.586 "read": true, 00:08:27.586 "write": true, 00:08:27.586 "unmap": true, 00:08:27.586 "flush": true, 00:08:27.586 "reset": true, 00:08:27.586 "nvme_admin": false, 00:08:27.586 "nvme_io": false, 00:08:27.586 "nvme_io_md": false, 00:08:27.586 "write_zeroes": true, 00:08:27.586 "zcopy": false, 00:08:27.586 "get_zone_info": false, 00:08:27.586 "zone_management": false, 00:08:27.586 "zone_append": false, 00:08:27.586 "compare": false, 00:08:27.586 "compare_and_write": false, 00:08:27.586 "abort": false, 00:08:27.586 "seek_hole": false, 00:08:27.586 "seek_data": false, 00:08:27.586 "copy": false, 00:08:27.586 "nvme_iov_md": false 00:08:27.586 }, 00:08:27.586 "memory_domains": [ 00:08:27.586 { 00:08:27.586 "dma_device_id": "system", 00:08:27.586 "dma_device_type": 1 00:08:27.586 }, 00:08:27.586 { 00:08:27.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:27.586 "dma_device_type": 2 00:08:27.586 }, 00:08:27.586 { 00:08:27.586 "dma_device_id": "system", 00:08:27.586 "dma_device_type": 1 00:08:27.586 }, 00:08:27.586 { 00:08:27.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:27.586 "dma_device_type": 2 00:08:27.586 }, 00:08:27.586 { 00:08:27.586 "dma_device_id": "system", 00:08:27.586 "dma_device_type": 1 00:08:27.586 }, 00:08:27.586 { 00:08:27.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:27.586 "dma_device_type": 2 00:08:27.586 } 00:08:27.586 ], 00:08:27.586 "driver_specific": { 00:08:27.586 "raid": { 00:08:27.586 "uuid": "94e3c797-9cf2-4ca8-bdf6-d0835ad2bbbe", 00:08:27.586 "strip_size_kb": 64, 00:08:27.586 "state": "online", 00:08:27.586 
"raid_level": "raid0", 00:08:27.586 "superblock": true, 00:08:27.586 "num_base_bdevs": 3, 00:08:27.586 "num_base_bdevs_discovered": 3, 00:08:27.586 "num_base_bdevs_operational": 3, 00:08:27.586 "base_bdevs_list": [ 00:08:27.586 { 00:08:27.586 "name": "NewBaseBdev", 00:08:27.586 "uuid": "a560a669-901a-4888-bc57-4c1bb726ba6f", 00:08:27.586 "is_configured": true, 00:08:27.586 "data_offset": 2048, 00:08:27.586 "data_size": 63488 00:08:27.586 }, 00:08:27.586 { 00:08:27.586 "name": "BaseBdev2", 00:08:27.586 "uuid": "506433d8-f5b7-49d9-b772-a88c582c515d", 00:08:27.586 "is_configured": true, 00:08:27.586 "data_offset": 2048, 00:08:27.586 "data_size": 63488 00:08:27.586 }, 00:08:27.586 { 00:08:27.586 "name": "BaseBdev3", 00:08:27.586 "uuid": "47342608-d423-415b-9bcd-2e3b5d7d65e2", 00:08:27.586 "is_configured": true, 00:08:27.586 "data_offset": 2048, 00:08:27.586 "data_size": 63488 00:08:27.586 } 00:08:27.586 ] 00:08:27.586 } 00:08:27.586 } 00:08:27.586 }' 00:08:27.586 13:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:27.586 13:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:27.586 BaseBdev2 00:08:27.586 BaseBdev3' 00:08:27.586 13:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:27.846 13:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:27.846 13:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:27.846 13:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:27.846 13:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 
00:08:27.846 13:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.846 13:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.846 13:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.846 13:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:27.846 13:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:27.846 13:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:27.846 13:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:27.846 13:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.846 13:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.846 13:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:27.846 13:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.846 13:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:27.846 13:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:27.846 13:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:27.847 13:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:27.847 13:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.847 13:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:08:27.847 13:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.847 13:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.847 13:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:27.847 13:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:27.847 13:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:27.847 13:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.847 13:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.847 [2024-11-17 13:18:16.965612] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:27.847 [2024-11-17 13:18:16.965649] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:27.847 [2024-11-17 13:18:16.965758] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:27.847 [2024-11-17 13:18:16.965819] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:27.847 [2024-11-17 13:18:16.965832] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:27.847 13:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.847 13:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64379 00:08:27.847 13:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64379 ']' 00:08:27.847 13:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 64379 00:08:27.847 13:18:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:27.847 13:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:27.847 13:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64379 00:08:27.847 killing process with pid 64379 00:08:27.847 13:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:27.847 13:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:27.847 13:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64379' 00:08:27.847 13:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64379 00:08:27.847 [2024-11-17 13:18:17.008743] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:27.847 13:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64379 00:08:28.415 [2024-11-17 13:18:17.348091] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:29.353 13:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:29.353 00:08:29.353 real 0m10.539s 00:08:29.353 user 0m16.652s 00:08:29.353 sys 0m1.729s 00:08:29.353 13:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:29.353 13:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.353 ************************************ 00:08:29.353 END TEST raid_state_function_test_sb 00:08:29.353 ************************************ 00:08:29.612 13:18:18 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:08:29.612 13:18:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:29.612 13:18:18 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:08:29.612 13:18:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:29.612 ************************************ 00:08:29.612 START TEST raid_superblock_test 00:08:29.612 ************************************ 00:08:29.612 13:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:08:29.612 13:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:29.612 13:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:29.612 13:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:29.612 13:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:29.612 13:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:29.612 13:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:29.612 13:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:29.612 13:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:29.612 13:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:29.612 13:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:29.612 13:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:29.612 13:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:29.612 13:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:29.612 13:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:29.612 13:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:29.612 13:18:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:29.612 13:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65005 00:08:29.612 13:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65005 00:08:29.612 13:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:29.612 13:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 65005 ']' 00:08:29.612 13:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.612 13:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:29.612 13:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:29.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:29.612 13:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:29.612 13:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.612 [2024-11-17 13:18:18.697476] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:08:29.612 [2024-11-17 13:18:18.697700] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65005 ] 00:08:29.871 [2024-11-17 13:18:18.871408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.871 [2024-11-17 13:18:18.994912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.130 [2024-11-17 13:18:19.214968] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:30.130 [2024-11-17 13:18:19.215133] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:30.389 13:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:30.389 13:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:30.390 13:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:30.390 13:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:30.390 13:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:30.390 13:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:30.390 13:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:30.390 13:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:30.390 13:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:30.390 13:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:30.390 13:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:30.390 
13:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.390 13:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.648 malloc1 00:08:30.648 13:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.648 13:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:30.648 13:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.648 13:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.648 [2024-11-17 13:18:19.657022] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:30.648 [2024-11-17 13:18:19.657161] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:30.648 [2024-11-17 13:18:19.657261] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:30.648 [2024-11-17 13:18:19.657316] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:30.648 [2024-11-17 13:18:19.659803] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:30.648 [2024-11-17 13:18:19.659881] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:30.648 pt1 00:08:30.648 13:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.648 13:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:30.648 13:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:30.648 13:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:30.648 13:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:30.648 13:18:19 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:30.648 13:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:30.648 13:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:30.648 13:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:30.648 13:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:30.648 13:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.648 13:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.648 malloc2 00:08:30.648 13:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.648 13:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:30.648 13:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.648 13:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.648 [2024-11-17 13:18:19.716363] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:30.648 [2024-11-17 13:18:19.716468] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:30.648 [2024-11-17 13:18:19.716510] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:30.648 [2024-11-17 13:18:19.716522] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:30.648 [2024-11-17 13:18:19.718890] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:30.648 [2024-11-17 13:18:19.718931] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:30.648 
pt2 00:08:30.648 13:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.648 13:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:30.648 13:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:30.649 13:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:30.649 13:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:30.649 13:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:30.649 13:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:30.649 13:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:30.649 13:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:30.649 13:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:30.649 13:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.649 13:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.649 malloc3 00:08:30.649 13:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.649 13:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:30.649 13:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.649 13:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.649 [2024-11-17 13:18:19.786919] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:30.649 [2024-11-17 13:18:19.787033] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:30.649 [2024-11-17 13:18:19.787077] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:30.649 [2024-11-17 13:18:19.787111] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:30.649 [2024-11-17 13:18:19.789451] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:30.649 [2024-11-17 13:18:19.789533] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:30.649 pt3 00:08:30.649 13:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.649 13:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:30.649 13:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:30.649 13:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:30.649 13:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.649 13:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.649 [2024-11-17 13:18:19.798958] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:30.649 [2024-11-17 13:18:19.801057] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:30.649 [2024-11-17 13:18:19.801178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:30.649 [2024-11-17 13:18:19.801407] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:30.649 [2024-11-17 13:18:19.801465] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:30.649 [2024-11-17 13:18:19.801789] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:08:30.649 [2024-11-17 13:18:19.802036] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:30.649 [2024-11-17 13:18:19.802086] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:30.649 [2024-11-17 13:18:19.802351] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:30.649 13:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.649 13:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:30.649 13:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:30.649 13:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:30.649 13:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:30.649 13:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:30.649 13:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:30.649 13:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:30.649 13:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:30.649 13:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:30.649 13:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:30.649 13:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.649 13:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.649 13:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.649 13:18:19 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:30.649 13:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.649 13:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.649 "name": "raid_bdev1", 00:08:30.649 "uuid": "0e96a99d-f927-4350-970a-3da13c23c550", 00:08:30.649 "strip_size_kb": 64, 00:08:30.649 "state": "online", 00:08:30.649 "raid_level": "raid0", 00:08:30.649 "superblock": true, 00:08:30.649 "num_base_bdevs": 3, 00:08:30.649 "num_base_bdevs_discovered": 3, 00:08:30.649 "num_base_bdevs_operational": 3, 00:08:30.649 "base_bdevs_list": [ 00:08:30.649 { 00:08:30.649 "name": "pt1", 00:08:30.649 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:30.649 "is_configured": true, 00:08:30.649 "data_offset": 2048, 00:08:30.649 "data_size": 63488 00:08:30.649 }, 00:08:30.649 { 00:08:30.649 "name": "pt2", 00:08:30.649 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:30.649 "is_configured": true, 00:08:30.649 "data_offset": 2048, 00:08:30.649 "data_size": 63488 00:08:30.649 }, 00:08:30.649 { 00:08:30.649 "name": "pt3", 00:08:30.649 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:30.649 "is_configured": true, 00:08:30.649 "data_offset": 2048, 00:08:30.649 "data_size": 63488 00:08:30.649 } 00:08:30.649 ] 00:08:30.649 }' 00:08:30.649 13:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.649 13:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.217 13:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:31.217 13:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:31.217 13:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:31.217 13:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:08:31.217 13:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:31.217 13:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:31.217 13:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:31.217 13:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:31.217 13:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.217 13:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.217 [2024-11-17 13:18:20.214606] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:31.217 13:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.217 13:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:31.217 "name": "raid_bdev1", 00:08:31.217 "aliases": [ 00:08:31.217 "0e96a99d-f927-4350-970a-3da13c23c550" 00:08:31.217 ], 00:08:31.217 "product_name": "Raid Volume", 00:08:31.217 "block_size": 512, 00:08:31.217 "num_blocks": 190464, 00:08:31.217 "uuid": "0e96a99d-f927-4350-970a-3da13c23c550", 00:08:31.217 "assigned_rate_limits": { 00:08:31.217 "rw_ios_per_sec": 0, 00:08:31.217 "rw_mbytes_per_sec": 0, 00:08:31.217 "r_mbytes_per_sec": 0, 00:08:31.217 "w_mbytes_per_sec": 0 00:08:31.217 }, 00:08:31.217 "claimed": false, 00:08:31.217 "zoned": false, 00:08:31.217 "supported_io_types": { 00:08:31.217 "read": true, 00:08:31.217 "write": true, 00:08:31.217 "unmap": true, 00:08:31.217 "flush": true, 00:08:31.217 "reset": true, 00:08:31.217 "nvme_admin": false, 00:08:31.217 "nvme_io": false, 00:08:31.217 "nvme_io_md": false, 00:08:31.217 "write_zeroes": true, 00:08:31.217 "zcopy": false, 00:08:31.217 "get_zone_info": false, 00:08:31.217 "zone_management": false, 00:08:31.217 "zone_append": false, 00:08:31.217 "compare": 
false, 00:08:31.217 "compare_and_write": false, 00:08:31.217 "abort": false, 00:08:31.217 "seek_hole": false, 00:08:31.217 "seek_data": false, 00:08:31.217 "copy": false, 00:08:31.217 "nvme_iov_md": false 00:08:31.217 }, 00:08:31.217 "memory_domains": [ 00:08:31.217 { 00:08:31.217 "dma_device_id": "system", 00:08:31.217 "dma_device_type": 1 00:08:31.217 }, 00:08:31.217 { 00:08:31.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.217 "dma_device_type": 2 00:08:31.217 }, 00:08:31.217 { 00:08:31.217 "dma_device_id": "system", 00:08:31.217 "dma_device_type": 1 00:08:31.217 }, 00:08:31.217 { 00:08:31.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.217 "dma_device_type": 2 00:08:31.217 }, 00:08:31.217 { 00:08:31.217 "dma_device_id": "system", 00:08:31.217 "dma_device_type": 1 00:08:31.217 }, 00:08:31.217 { 00:08:31.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.217 "dma_device_type": 2 00:08:31.217 } 00:08:31.217 ], 00:08:31.217 "driver_specific": { 00:08:31.217 "raid": { 00:08:31.217 "uuid": "0e96a99d-f927-4350-970a-3da13c23c550", 00:08:31.217 "strip_size_kb": 64, 00:08:31.217 "state": "online", 00:08:31.217 "raid_level": "raid0", 00:08:31.217 "superblock": true, 00:08:31.217 "num_base_bdevs": 3, 00:08:31.217 "num_base_bdevs_discovered": 3, 00:08:31.217 "num_base_bdevs_operational": 3, 00:08:31.217 "base_bdevs_list": [ 00:08:31.217 { 00:08:31.217 "name": "pt1", 00:08:31.217 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:31.217 "is_configured": true, 00:08:31.217 "data_offset": 2048, 00:08:31.217 "data_size": 63488 00:08:31.217 }, 00:08:31.217 { 00:08:31.217 "name": "pt2", 00:08:31.217 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:31.217 "is_configured": true, 00:08:31.217 "data_offset": 2048, 00:08:31.217 "data_size": 63488 00:08:31.217 }, 00:08:31.217 { 00:08:31.217 "name": "pt3", 00:08:31.217 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:31.217 "is_configured": true, 00:08:31.217 "data_offset": 2048, 00:08:31.217 "data_size": 
63488 00:08:31.217 } 00:08:31.217 ] 00:08:31.217 } 00:08:31.217 } 00:08:31.217 }' 00:08:31.217 13:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:31.217 13:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:31.217 pt2 00:08:31.217 pt3' 00:08:31.217 13:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:31.217 13:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:31.217 13:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:31.217 13:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:31.217 13:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:31.217 13:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.217 13:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.217 13:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.217 13:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:31.217 13:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:31.217 13:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:31.217 13:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:31.217 13:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:31.217 13:18:20 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.217 13:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.477 13:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.477 13:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:31.477 13:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:31.477 13:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:31.477 13:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:31.477 13:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.477 13:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.477 13:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:31.477 13:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.477 13:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:31.477 13:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:31.477 13:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:31.477 13:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.477 13:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.477 13:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:31.477 [2024-11-17 13:18:20.514036] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:31.477 13:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:08:31.477 13:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=0e96a99d-f927-4350-970a-3da13c23c550 00:08:31.477 13:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 0e96a99d-f927-4350-970a-3da13c23c550 ']' 00:08:31.477 13:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:31.477 13:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.477 13:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.477 [2024-11-17 13:18:20.565634] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:31.477 [2024-11-17 13:18:20.565739] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:31.477 [2024-11-17 13:18:20.565848] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:31.477 [2024-11-17 13:18:20.565937] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:31.477 [2024-11-17 13:18:20.565949] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:31.477 13:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.477 13:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:31.477 13:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.477 13:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.477 13:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.477 13:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.477 13:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:08:31.477 13:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:31.477 13:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:31.477 13:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:31.477 13:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.477 13:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.477 13:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.477 13:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:31.477 13:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:31.477 13:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.477 13:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.477 13:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.477 13:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:31.477 13:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:31.477 13:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.477 13:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.477 13:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.477 13:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:31.477 13:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.477 13:18:20 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:31.477 13:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:31.477 13:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.477 13:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:31.477 13:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:31.477 13:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:31.477 13:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:31.477 13:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:31.477 13:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:31.477 13:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:31.477 13:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:31.477 13:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:31.477 13:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.477 13:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.737 [2024-11-17 13:18:20.705452] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:31.737 [2024-11-17 13:18:20.707557] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:31.737 [2024-11-17 13:18:20.707614] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:31.737 [2024-11-17 13:18:20.707666] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:31.737 [2024-11-17 13:18:20.707721] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:31.737 [2024-11-17 13:18:20.707740] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:31.737 [2024-11-17 13:18:20.707757] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:31.737 [2024-11-17 13:18:20.707768] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:31.737 request: 00:08:31.737 { 00:08:31.737 "name": "raid_bdev1", 00:08:31.737 "raid_level": "raid0", 00:08:31.737 "base_bdevs": [ 00:08:31.737 "malloc1", 00:08:31.737 "malloc2", 00:08:31.737 "malloc3" 00:08:31.737 ], 00:08:31.737 "strip_size_kb": 64, 00:08:31.737 "superblock": false, 00:08:31.737 "method": "bdev_raid_create", 00:08:31.737 "req_id": 1 00:08:31.737 } 00:08:31.737 Got JSON-RPC error response 00:08:31.737 response: 00:08:31.737 { 00:08:31.737 "code": -17, 00:08:31.737 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:31.737 } 00:08:31.737 13:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:31.737 13:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:31.737 13:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:31.737 13:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:31.737 13:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:31.737 13:18:20 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.737 13:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:31.737 13:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.737 13:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.737 13:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.737 13:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:31.737 13:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:31.737 13:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:31.737 13:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.737 13:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.737 [2024-11-17 13:18:20.773290] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:31.737 [2024-11-17 13:18:20.773424] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:31.737 [2024-11-17 13:18:20.773480] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:31.737 [2024-11-17 13:18:20.773535] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:31.737 [2024-11-17 13:18:20.776003] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:31.737 [2024-11-17 13:18:20.776085] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:31.737 [2024-11-17 13:18:20.776233] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:31.737 [2024-11-17 13:18:20.776346] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:08:31.737 pt1 00:08:31.737 13:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.737 13:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:31.737 13:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:31.737 13:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:31.737 13:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:31.737 13:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:31.737 13:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:31.737 13:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:31.737 13:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:31.737 13:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:31.737 13:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.737 13:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.737 13:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.737 13:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.737 13:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:31.737 13:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.737 13:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:31.737 "name": "raid_bdev1", 00:08:31.737 "uuid": "0e96a99d-f927-4350-970a-3da13c23c550", 00:08:31.737 
"strip_size_kb": 64, 00:08:31.737 "state": "configuring", 00:08:31.737 "raid_level": "raid0", 00:08:31.737 "superblock": true, 00:08:31.737 "num_base_bdevs": 3, 00:08:31.737 "num_base_bdevs_discovered": 1, 00:08:31.737 "num_base_bdevs_operational": 3, 00:08:31.737 "base_bdevs_list": [ 00:08:31.737 { 00:08:31.737 "name": "pt1", 00:08:31.737 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:31.737 "is_configured": true, 00:08:31.737 "data_offset": 2048, 00:08:31.737 "data_size": 63488 00:08:31.737 }, 00:08:31.737 { 00:08:31.737 "name": null, 00:08:31.737 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:31.737 "is_configured": false, 00:08:31.737 "data_offset": 2048, 00:08:31.737 "data_size": 63488 00:08:31.737 }, 00:08:31.737 { 00:08:31.737 "name": null, 00:08:31.737 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:31.737 "is_configured": false, 00:08:31.737 "data_offset": 2048, 00:08:31.737 "data_size": 63488 00:08:31.737 } 00:08:31.737 ] 00:08:31.737 }' 00:08:31.737 13:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:31.737 13:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.996 13:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:31.996 13:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:31.996 13:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.996 13:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.996 [2024-11-17 13:18:21.216627] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:31.996 [2024-11-17 13:18:21.216712] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:31.996 [2024-11-17 13:18:21.216740] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:08:31.996 [2024-11-17 13:18:21.216752] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:31.996 [2024-11-17 13:18:21.217270] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:31.996 [2024-11-17 13:18:21.217296] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:31.996 [2024-11-17 13:18:21.217420] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:31.996 [2024-11-17 13:18:21.217519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:32.315 pt2 00:08:32.315 13:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.315 13:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:32.315 13:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.315 13:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.315 [2024-11-17 13:18:21.228572] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:32.315 13:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.315 13:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:32.315 13:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:32.315 13:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:32.315 13:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:32.315 13:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:32.316 13:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:32.316 13:18:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.316 13:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.316 13:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.316 13:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.316 13:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.316 13:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:32.316 13:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.316 13:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.316 13:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.316 13:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.316 "name": "raid_bdev1", 00:08:32.316 "uuid": "0e96a99d-f927-4350-970a-3da13c23c550", 00:08:32.316 "strip_size_kb": 64, 00:08:32.316 "state": "configuring", 00:08:32.316 "raid_level": "raid0", 00:08:32.316 "superblock": true, 00:08:32.316 "num_base_bdevs": 3, 00:08:32.316 "num_base_bdevs_discovered": 1, 00:08:32.316 "num_base_bdevs_operational": 3, 00:08:32.316 "base_bdevs_list": [ 00:08:32.316 { 00:08:32.316 "name": "pt1", 00:08:32.316 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:32.316 "is_configured": true, 00:08:32.316 "data_offset": 2048, 00:08:32.316 "data_size": 63488 00:08:32.316 }, 00:08:32.316 { 00:08:32.316 "name": null, 00:08:32.316 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:32.316 "is_configured": false, 00:08:32.316 "data_offset": 0, 00:08:32.316 "data_size": 63488 00:08:32.316 }, 00:08:32.316 { 00:08:32.316 "name": null, 00:08:32.316 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:32.316 
"is_configured": false, 00:08:32.316 "data_offset": 2048, 00:08:32.316 "data_size": 63488 00:08:32.316 } 00:08:32.316 ] 00:08:32.316 }' 00:08:32.316 13:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.316 13:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.576 13:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:32.576 13:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:32.576 13:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:32.576 13:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.576 13:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.576 [2024-11-17 13:18:21.687764] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:32.576 [2024-11-17 13:18:21.687918] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:32.576 [2024-11-17 13:18:21.687956] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:08:32.576 [2024-11-17 13:18:21.687970] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:32.576 [2024-11-17 13:18:21.688504] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:32.576 [2024-11-17 13:18:21.688535] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:32.576 [2024-11-17 13:18:21.688645] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:32.576 [2024-11-17 13:18:21.688673] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:32.576 pt2 00:08:32.576 13:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:32.576 13:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:32.576 13:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:32.576 13:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:32.576 13:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.576 13:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.576 [2024-11-17 13:18:21.699719] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:32.576 [2024-11-17 13:18:21.699779] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:32.576 [2024-11-17 13:18:21.699796] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:32.576 [2024-11-17 13:18:21.699808] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:32.577 [2024-11-17 13:18:21.700263] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:32.577 [2024-11-17 13:18:21.700293] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:32.577 [2024-11-17 13:18:21.700372] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:32.577 [2024-11-17 13:18:21.700398] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:32.577 [2024-11-17 13:18:21.700549] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:32.577 [2024-11-17 13:18:21.700566] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:32.577 [2024-11-17 13:18:21.700934] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:32.577 [2024-11-17 13:18:21.701102] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:32.577 [2024-11-17 13:18:21.701112] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:32.577 [2024-11-17 13:18:21.701295] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:32.577 pt3 00:08:32.577 13:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.577 13:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:32.577 13:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:32.577 13:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:32.577 13:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:32.577 13:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:32.577 13:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:32.577 13:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:32.577 13:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:32.577 13:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.577 13:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.577 13:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.577 13:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.577 13:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:32.577 13:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:08:32.577 13:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.577 13:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.577 13:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.577 13:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.577 "name": "raid_bdev1", 00:08:32.577 "uuid": "0e96a99d-f927-4350-970a-3da13c23c550", 00:08:32.577 "strip_size_kb": 64, 00:08:32.577 "state": "online", 00:08:32.577 "raid_level": "raid0", 00:08:32.577 "superblock": true, 00:08:32.577 "num_base_bdevs": 3, 00:08:32.577 "num_base_bdevs_discovered": 3, 00:08:32.577 "num_base_bdevs_operational": 3, 00:08:32.577 "base_bdevs_list": [ 00:08:32.577 { 00:08:32.577 "name": "pt1", 00:08:32.577 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:32.577 "is_configured": true, 00:08:32.577 "data_offset": 2048, 00:08:32.577 "data_size": 63488 00:08:32.577 }, 00:08:32.577 { 00:08:32.577 "name": "pt2", 00:08:32.577 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:32.577 "is_configured": true, 00:08:32.577 "data_offset": 2048, 00:08:32.577 "data_size": 63488 00:08:32.577 }, 00:08:32.577 { 00:08:32.577 "name": "pt3", 00:08:32.577 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:32.577 "is_configured": true, 00:08:32.577 "data_offset": 2048, 00:08:32.577 "data_size": 63488 00:08:32.577 } 00:08:32.577 ] 00:08:32.577 }' 00:08:32.577 13:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.577 13:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.147 13:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:33.147 13:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:33.147 13:18:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:33.147 13:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:33.148 13:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:33.148 13:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:33.148 13:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:33.148 13:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.148 13:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.148 13:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:33.148 [2024-11-17 13:18:22.147377] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:33.148 13:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.148 13:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:33.148 "name": "raid_bdev1", 00:08:33.148 "aliases": [ 00:08:33.148 "0e96a99d-f927-4350-970a-3da13c23c550" 00:08:33.148 ], 00:08:33.148 "product_name": "Raid Volume", 00:08:33.148 "block_size": 512, 00:08:33.148 "num_blocks": 190464, 00:08:33.148 "uuid": "0e96a99d-f927-4350-970a-3da13c23c550", 00:08:33.148 "assigned_rate_limits": { 00:08:33.148 "rw_ios_per_sec": 0, 00:08:33.148 "rw_mbytes_per_sec": 0, 00:08:33.148 "r_mbytes_per_sec": 0, 00:08:33.148 "w_mbytes_per_sec": 0 00:08:33.148 }, 00:08:33.148 "claimed": false, 00:08:33.148 "zoned": false, 00:08:33.148 "supported_io_types": { 00:08:33.148 "read": true, 00:08:33.148 "write": true, 00:08:33.148 "unmap": true, 00:08:33.148 "flush": true, 00:08:33.148 "reset": true, 00:08:33.148 "nvme_admin": false, 00:08:33.148 "nvme_io": false, 00:08:33.148 "nvme_io_md": false, 00:08:33.148 
"write_zeroes": true, 00:08:33.148 "zcopy": false, 00:08:33.148 "get_zone_info": false, 00:08:33.148 "zone_management": false, 00:08:33.148 "zone_append": false, 00:08:33.148 "compare": false, 00:08:33.148 "compare_and_write": false, 00:08:33.148 "abort": false, 00:08:33.148 "seek_hole": false, 00:08:33.148 "seek_data": false, 00:08:33.148 "copy": false, 00:08:33.148 "nvme_iov_md": false 00:08:33.148 }, 00:08:33.148 "memory_domains": [ 00:08:33.148 { 00:08:33.148 "dma_device_id": "system", 00:08:33.148 "dma_device_type": 1 00:08:33.148 }, 00:08:33.148 { 00:08:33.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.148 "dma_device_type": 2 00:08:33.148 }, 00:08:33.148 { 00:08:33.148 "dma_device_id": "system", 00:08:33.148 "dma_device_type": 1 00:08:33.148 }, 00:08:33.148 { 00:08:33.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.148 "dma_device_type": 2 00:08:33.148 }, 00:08:33.148 { 00:08:33.148 "dma_device_id": "system", 00:08:33.148 "dma_device_type": 1 00:08:33.148 }, 00:08:33.148 { 00:08:33.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.148 "dma_device_type": 2 00:08:33.148 } 00:08:33.148 ], 00:08:33.148 "driver_specific": { 00:08:33.148 "raid": { 00:08:33.148 "uuid": "0e96a99d-f927-4350-970a-3da13c23c550", 00:08:33.148 "strip_size_kb": 64, 00:08:33.148 "state": "online", 00:08:33.148 "raid_level": "raid0", 00:08:33.148 "superblock": true, 00:08:33.148 "num_base_bdevs": 3, 00:08:33.148 "num_base_bdevs_discovered": 3, 00:08:33.148 "num_base_bdevs_operational": 3, 00:08:33.148 "base_bdevs_list": [ 00:08:33.148 { 00:08:33.148 "name": "pt1", 00:08:33.148 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:33.148 "is_configured": true, 00:08:33.148 "data_offset": 2048, 00:08:33.148 "data_size": 63488 00:08:33.148 }, 00:08:33.148 { 00:08:33.148 "name": "pt2", 00:08:33.148 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:33.148 "is_configured": true, 00:08:33.148 "data_offset": 2048, 00:08:33.148 "data_size": 63488 00:08:33.148 }, 00:08:33.148 
{ 00:08:33.148 "name": "pt3", 00:08:33.148 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:33.148 "is_configured": true, 00:08:33.148 "data_offset": 2048, 00:08:33.148 "data_size": 63488 00:08:33.148 } 00:08:33.148 ] 00:08:33.148 } 00:08:33.148 } 00:08:33.148 }' 00:08:33.148 13:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:33.148 13:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:33.148 pt2 00:08:33.148 pt3' 00:08:33.148 13:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:33.148 13:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:33.148 13:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:33.148 13:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:33.148 13:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.148 13:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.148 13:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:33.148 13:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.148 13:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:33.148 13:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:33.148 13:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:33.148 13:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:33.148 13:18:22 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.148 13:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.148 13:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:33.148 13:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.148 13:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:33.148 13:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:33.148 13:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:33.148 13:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:33.148 13:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.148 13:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.148 13:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:33.408 13:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.408 13:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:33.408 13:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:33.408 13:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:33.408 13:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:33.408 13:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.408 13:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.408 
[2024-11-17 13:18:22.410883] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:33.408 13:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.408 13:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 0e96a99d-f927-4350-970a-3da13c23c550 '!=' 0e96a99d-f927-4350-970a-3da13c23c550 ']' 00:08:33.408 13:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:08:33.408 13:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:33.408 13:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:33.408 13:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65005 00:08:33.408 13:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 65005 ']' 00:08:33.408 13:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 65005 00:08:33.408 13:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:33.408 13:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:33.408 13:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65005 00:08:33.408 13:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:33.408 13:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:33.408 13:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65005' 00:08:33.408 killing process with pid 65005 00:08:33.408 13:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 65005 00:08:33.408 [2024-11-17 13:18:22.481184] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:33.408 [2024-11-17 13:18:22.481320] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:33.408 13:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 65005 00:08:33.408 [2024-11-17 13:18:22.481394] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:33.408 [2024-11-17 13:18:22.481408] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:33.667 [2024-11-17 13:18:22.809483] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:35.046 13:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:35.046 00:08:35.046 real 0m5.382s 00:08:35.046 user 0m7.708s 00:08:35.046 sys 0m0.878s 00:08:35.046 13:18:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:35.046 13:18:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.046 ************************************ 00:08:35.046 END TEST raid_superblock_test 00:08:35.046 ************************************ 00:08:35.046 13:18:24 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:08:35.046 13:18:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:35.046 13:18:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:35.046 13:18:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:35.046 ************************************ 00:08:35.046 START TEST raid_read_error_test 00:08:35.046 ************************************ 00:08:35.046 13:18:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:08:35.046 13:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:35.046 13:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:35.046 13:18:24 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:35.046 13:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:35.046 13:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:35.046 13:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:35.046 13:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:35.046 13:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:35.046 13:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:35.046 13:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:35.046 13:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:35.046 13:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:35.046 13:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:35.046 13:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:35.046 13:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:35.046 13:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:35.046 13:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:35.046 13:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:35.046 13:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:35.046 13:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:35.046 13:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:35.046 13:18:24 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:35.046 13:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:35.046 13:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:35.046 13:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:35.046 13:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.3AQ0zJjB2J 00:08:35.046 13:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65259 00:08:35.046 13:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65259 00:08:35.046 13:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:35.046 13:18:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65259 ']' 00:08:35.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:35.046 13:18:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:35.046 13:18:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:35.046 13:18:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:35.046 13:18:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:35.046 13:18:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.046 [2024-11-17 13:18:24.161532] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:08:35.046 [2024-11-17 13:18:24.161665] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65259 ] 00:08:35.306 [2024-11-17 13:18:24.336293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.306 [2024-11-17 13:18:24.456311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.566 [2024-11-17 13:18:24.670794] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:35.566 [2024-11-17 13:18:24.670833] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:35.826 13:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:35.826 13:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:35.826 13:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:35.826 13:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:35.826 13:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.826 13:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.086 BaseBdev1_malloc 00:08:36.086 13:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.086 13:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:36.086 13:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.086 13:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.086 true 00:08:36.086 13:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:36.086 13:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:36.086 13:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.086 13:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.086 [2024-11-17 13:18:25.087965] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:36.086 [2024-11-17 13:18:25.088033] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:36.086 [2024-11-17 13:18:25.088057] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:36.086 [2024-11-17 13:18:25.088070] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:36.086 [2024-11-17 13:18:25.090541] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:36.086 [2024-11-17 13:18:25.090587] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:36.086 BaseBdev1 00:08:36.086 13:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.086 13:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:36.086 13:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:36.086 13:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.086 13:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.086 BaseBdev2_malloc 00:08:36.086 13:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.086 13:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:36.086 13:18:25 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.086 13:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.086 true 00:08:36.086 13:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.086 13:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:36.086 13:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.086 13:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.086 [2024-11-17 13:18:25.158695] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:36.086 [2024-11-17 13:18:25.158760] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:36.086 [2024-11-17 13:18:25.158779] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:36.086 [2024-11-17 13:18:25.158789] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:36.086 [2024-11-17 13:18:25.161117] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:36.086 [2024-11-17 13:18:25.161162] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:36.086 BaseBdev2 00:08:36.086 13:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.086 13:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:36.086 13:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:36.086 13:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.086 13:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.086 BaseBdev3_malloc 00:08:36.086 13:18:25 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.086 13:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:36.086 13:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.086 13:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.086 true 00:08:36.086 13:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.086 13:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:36.086 13:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.086 13:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.086 [2024-11-17 13:18:25.239068] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:36.087 [2024-11-17 13:18:25.239130] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:36.087 [2024-11-17 13:18:25.239152] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:36.087 [2024-11-17 13:18:25.239163] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:36.087 [2024-11-17 13:18:25.241507] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:36.087 [2024-11-17 13:18:25.241593] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:36.087 BaseBdev3 00:08:36.087 13:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.087 13:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:36.087 13:18:25 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.087 13:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.087 [2024-11-17 13:18:25.251118] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:36.087 [2024-11-17 13:18:25.253167] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:36.087 [2024-11-17 13:18:25.253343] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:36.087 [2024-11-17 13:18:25.253587] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:36.087 [2024-11-17 13:18:25.253605] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:36.087 [2024-11-17 13:18:25.253899] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:08:36.087 [2024-11-17 13:18:25.254080] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:36.087 [2024-11-17 13:18:25.254095] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:36.087 [2024-11-17 13:18:25.254291] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:36.087 13:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.087 13:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:36.087 13:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:36.087 13:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:36.087 13:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:36.087 13:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:36.087 13:18:25 
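The xtrace above builds each base bdev in three layers (malloc → error → passthru) before assembling the raid0 array with a superblock. Below is a minimal sketch of that RPC sequence; the `rpc` function here is a stub that only prints the commands (an assumption for illustration — a real run would instead forward to something like `scripts/rpc.py -s /var/tmp/spdk.sock`):

```shell
#!/bin/sh
# Stub: print each RPC instead of sending it to a live SPDK target.
# For a live target, replace with: rpc() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
rpc() { echo "rpc.py $*"; }

for i in 1 2 3; do
    # 32 MiB malloc bdev with 512-byte blocks, as in the trace above
    rpc bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"
    # error-injection bdev wrapping the malloc bdev (exposed as EE_<name>)
    rpc bdev_error_create "BaseBdev${i}_malloc"
    # passthru bdev gives the stack its final BaseBdevN name
    rpc bdev_passthru_create -b "EE_BaseBdev${i}_malloc" -p "BaseBdev${i}"
done

# raid0 across the three passthru bdevs, 64 KiB strip, with superblock (-s)
rpc bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s
```

The error bdev sits between the malloc backing store and the passthru name so that `bdev_error_inject_error` can later fail I/O on any one leg of the array without touching the others.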
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:36.087 13:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.087 13:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.087 13:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.087 13:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.087 13:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.087 13:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:36.087 13:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.087 13:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.087 13:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.087 13:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.087 "name": "raid_bdev1", 00:08:36.087 "uuid": "962a34b6-c053-463c-a585-1bfe9c29b699", 00:08:36.087 "strip_size_kb": 64, 00:08:36.087 "state": "online", 00:08:36.087 "raid_level": "raid0", 00:08:36.087 "superblock": true, 00:08:36.087 "num_base_bdevs": 3, 00:08:36.087 "num_base_bdevs_discovered": 3, 00:08:36.087 "num_base_bdevs_operational": 3, 00:08:36.087 "base_bdevs_list": [ 00:08:36.087 { 00:08:36.087 "name": "BaseBdev1", 00:08:36.087 "uuid": "0a88adc4-caff-5a6c-ac58-74d53cf302a4", 00:08:36.087 "is_configured": true, 00:08:36.087 "data_offset": 2048, 00:08:36.087 "data_size": 63488 00:08:36.087 }, 00:08:36.087 { 00:08:36.087 "name": "BaseBdev2", 00:08:36.087 "uuid": "83399866-b8af-58a1-ad15-bf0f5b374d51", 00:08:36.087 "is_configured": true, 00:08:36.087 "data_offset": 2048, 00:08:36.087 "data_size": 63488 
00:08:36.087 }, 00:08:36.087 { 00:08:36.087 "name": "BaseBdev3", 00:08:36.087 "uuid": "668a6fdb-3785-50ed-a074-4c275c0a50e6", 00:08:36.087 "is_configured": true, 00:08:36.087 "data_offset": 2048, 00:08:36.087 "data_size": 63488 00:08:36.087 } 00:08:36.087 ] 00:08:36.087 }' 00:08:36.087 13:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.346 13:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.605 13:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:36.605 13:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:36.605 [2024-11-17 13:18:25.755897] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:08:37.544 13:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:37.544 13:18:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.544 13:18:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.544 13:18:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.544 13:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:37.544 13:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:37.544 13:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:37.544 13:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:37.544 13:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:37.544 13:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
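The `verify_raid_bdev_state` helper traced above pulls the `raid_bdev1` entry out of `bdev_raid_get_bdevs all` and compares individual fields. Those same field checks can be reproduced offline with `jq` against the JSON captured in this log (abridged here to a single base bdev; requires `jq` on PATH):

```shell
#!/bin/sh
# raid_bdev_info as reported by bdev_raid_get_bdevs in the trace above (abridged)
info='{"name":"raid_bdev1","state":"online","raid_level":"raid0","strip_size_kb":64,"num_base_bdevs":3,"num_base_bdevs_discovered":3,"base_bdevs_list":[{"name":"BaseBdev1","is_configured":true}]}'

# extract the fields verify_raid_bdev_state compares against its expectations
state=$(echo "$info" | jq -r .state)
level=$(echo "$info" | jq -r .raid_level)
strip=$(echo "$info" | jq -r .strip_size_kb)
discovered=$(echo "$info" | jq -r .num_base_bdevs_discovered)

echo "state=$state level=$level strip=$strip discovered=$discovered"
```

Because raid0 carries no redundancy (`has_redundancy` returns 1 for it above), the test expects all three base bdevs to remain discovered and operational both before and after the read-error injection.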
00:08:37.544 13:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:37.544 13:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.544 13:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:37.544 13:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.544 13:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.544 13:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.544 13:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.544 13:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.544 13:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:37.544 13:18:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.544 13:18:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.544 13:18:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.544 13:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.544 "name": "raid_bdev1", 00:08:37.544 "uuid": "962a34b6-c053-463c-a585-1bfe9c29b699", 00:08:37.544 "strip_size_kb": 64, 00:08:37.544 "state": "online", 00:08:37.544 "raid_level": "raid0", 00:08:37.544 "superblock": true, 00:08:37.544 "num_base_bdevs": 3, 00:08:37.544 "num_base_bdevs_discovered": 3, 00:08:37.544 "num_base_bdevs_operational": 3, 00:08:37.544 "base_bdevs_list": [ 00:08:37.544 { 00:08:37.544 "name": "BaseBdev1", 00:08:37.544 "uuid": "0a88adc4-caff-5a6c-ac58-74d53cf302a4", 00:08:37.544 "is_configured": true, 00:08:37.544 "data_offset": 2048, 00:08:37.544 "data_size": 63488 
00:08:37.544 }, 00:08:37.544 { 00:08:37.544 "name": "BaseBdev2", 00:08:37.544 "uuid": "83399866-b8af-58a1-ad15-bf0f5b374d51", 00:08:37.544 "is_configured": true, 00:08:37.544 "data_offset": 2048, 00:08:37.544 "data_size": 63488 00:08:37.545 }, 00:08:37.545 { 00:08:37.545 "name": "BaseBdev3", 00:08:37.545 "uuid": "668a6fdb-3785-50ed-a074-4c275c0a50e6", 00:08:37.545 "is_configured": true, 00:08:37.545 "data_offset": 2048, 00:08:37.545 "data_size": 63488 00:08:37.545 } 00:08:37.545 ] 00:08:37.545 }' 00:08:37.545 13:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.545 13:18:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.114 13:18:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:38.114 13:18:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.114 13:18:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.114 [2024-11-17 13:18:27.098592] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:38.114 [2024-11-17 13:18:27.098624] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:38.114 [2024-11-17 13:18:27.101365] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:38.114 [2024-11-17 13:18:27.101450] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:38.114 [2024-11-17 13:18:27.101522] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:38.114 [2024-11-17 13:18:27.101599] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:38.114 { 00:08:38.114 "results": [ 00:08:38.114 { 00:08:38.114 "job": "raid_bdev1", 00:08:38.114 "core_mask": "0x1", 00:08:38.114 "workload": "randrw", 00:08:38.114 "percentage": 50, 
00:08:38.114 "status": "finished", 00:08:38.114 "queue_depth": 1, 00:08:38.114 "io_size": 131072, 00:08:38.114 "runtime": 1.343045, 00:08:38.114 "iops": 15256.376368625028, 00:08:38.114 "mibps": 1907.0470460781285, 00:08:38.114 "io_failed": 1, 00:08:38.114 "io_timeout": 0, 00:08:38.114 "avg_latency_us": 91.0953704033233, 00:08:38.114 "min_latency_us": 21.910917030567685, 00:08:38.114 "max_latency_us": 1395.1441048034935 00:08:38.114 } 00:08:38.114 ], 00:08:38.114 "core_count": 1 00:08:38.114 } 00:08:38.114 13:18:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.114 13:18:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65259 00:08:38.114 13:18:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65259 ']' 00:08:38.114 13:18:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65259 00:08:38.114 13:18:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:38.114 13:18:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:38.114 13:18:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65259 00:08:38.114 13:18:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:38.114 13:18:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:38.114 13:18:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65259' 00:08:38.114 killing process with pid 65259 00:08:38.114 13:18:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65259 00:08:38.114 [2024-11-17 13:18:27.148787] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:38.114 13:18:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65259 00:08:38.374 [2024-11-17 
13:18:27.378186] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:39.314 13:18:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:39.314 13:18:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.3AQ0zJjB2J 00:08:39.314 13:18:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:39.314 13:18:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:08:39.314 13:18:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:39.314 13:18:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:39.314 13:18:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:39.314 13:18:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:08:39.314 00:08:39.314 real 0m4.465s 00:08:39.314 user 0m5.292s 00:08:39.314 sys 0m0.563s 00:08:39.314 13:18:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:39.314 13:18:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.314 ************************************ 00:08:39.314 END TEST raid_read_error_test 00:08:39.314 ************************************ 00:08:39.574 13:18:28 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:08:39.574 13:18:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:39.574 13:18:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:39.574 13:18:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:39.574 ************************************ 00:08:39.574 START TEST raid_write_error_test 00:08:39.574 ************************************ 00:08:39.574 13:18:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:08:39.574 13:18:28 
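The `grep`/`awk` pipeline above extracts `fail_per_s=0.74` from column 6 of the bdevperf log. That figure is consistent with the JSON results block earlier in the trace (`io_failed: 1` over `runtime: 1.343045` seconds), which can be checked with a few lines of shell:

```shell
#!/bin/sh
# Results captured from the bdevperf JSON block in the trace above
io_failed=1
runtime=1.343045

# failures per second; consistent with the value bdevperf prints and the
# test extracts via: grep raid_bdev1 | grep -v Job | awk '{print $6}'
fail_per_s=$(awk -v f="$io_failed" -v t="$runtime" 'BEGIN { printf "%.2f", f / t }')
echo "$fail_per_s"   # prints 0.74
```

The test then only asserts `[[ $fail_per_s != 0.00 ]]`: with a raid0 (non-redundant) array, the single injected read error must surface to the application, so a zero failure rate would mean the injection never took effect.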
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:39.574 13:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:39.574 13:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:39.574 13:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:39.574 13:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:39.574 13:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:39.574 13:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:39.574 13:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:39.574 13:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:39.574 13:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:39.574 13:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:39.574 13:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:39.574 13:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:39.574 13:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:39.574 13:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:39.574 13:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:39.574 13:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:39.574 13:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:39.574 13:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:39.574 13:18:28 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:39.574 13:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:39.574 13:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:39.574 13:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:39.574 13:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:39.574 13:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:39.574 13:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.sJvPpm5LXS 00:08:39.574 13:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65399 00:08:39.574 13:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:39.574 13:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65399 00:08:39.574 13:18:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65399 ']' 00:08:39.574 13:18:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:39.575 13:18:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:39.575 13:18:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:39.575 13:18:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:39.575 13:18:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.575 [2024-11-17 13:18:28.688868] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:08:39.575 [2024-11-17 13:18:28.688983] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65399 ] 00:08:39.834 [2024-11-17 13:18:28.860990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.834 [2024-11-17 13:18:28.979528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.098 [2024-11-17 13:18:29.182360] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:40.098 [2024-11-17 13:18:29.182430] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:40.362 13:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:40.362 13:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:40.362 13:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:40.362 13:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:40.362 13:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.362 13:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.362 BaseBdev1_malloc 00:08:40.362 13:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.362 13:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:08:40.362 13:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.362 13:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.362 true 00:08:40.362 13:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.362 13:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:40.362 13:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.362 13:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.362 [2024-11-17 13:18:29.553090] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:40.362 [2024-11-17 13:18:29.553148] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:40.362 [2024-11-17 13:18:29.553169] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:40.362 [2024-11-17 13:18:29.553181] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:40.362 [2024-11-17 13:18:29.555368] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:40.362 [2024-11-17 13:18:29.555408] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:40.362 BaseBdev1 00:08:40.362 13:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.362 13:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:40.362 13:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:40.362 13:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.362 13:18:29 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:40.623 BaseBdev2_malloc 00:08:40.623 13:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.623 13:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:40.623 13:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.623 13:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.623 true 00:08:40.623 13:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.623 13:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:40.623 13:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.623 13:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.623 [2024-11-17 13:18:29.618909] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:40.623 [2024-11-17 13:18:29.618970] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:40.623 [2024-11-17 13:18:29.619004] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:40.623 [2024-11-17 13:18:29.619015] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:40.623 [2024-11-17 13:18:29.621174] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:40.623 [2024-11-17 13:18:29.621227] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:40.623 BaseBdev2 00:08:40.623 13:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.623 13:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:40.623 13:18:29 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:40.623 13:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.623 13:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.623 BaseBdev3_malloc 00:08:40.623 13:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.623 13:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:40.623 13:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.623 13:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.623 true 00:08:40.623 13:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.623 13:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:40.623 13:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.623 13:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.623 [2024-11-17 13:18:29.695743] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:40.623 [2024-11-17 13:18:29.695849] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:40.623 [2024-11-17 13:18:29.695870] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:40.623 [2024-11-17 13:18:29.695881] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:40.623 [2024-11-17 13:18:29.698028] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:40.623 [2024-11-17 13:18:29.698068] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:08:40.623 BaseBdev3 00:08:40.623 13:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.623 13:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:40.623 13:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.623 13:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.623 [2024-11-17 13:18:29.707790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:40.623 [2024-11-17 13:18:29.709555] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:40.623 [2024-11-17 13:18:29.709634] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:40.623 [2024-11-17 13:18:29.709814] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:40.623 [2024-11-17 13:18:29.709828] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:40.623 [2024-11-17 13:18:29.710072] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:08:40.623 [2024-11-17 13:18:29.710213] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:40.623 [2024-11-17 13:18:29.710238] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:40.623 [2024-11-17 13:18:29.710384] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:40.623 13:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.623 13:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:40.623 13:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:08:40.623 13:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:40.623 13:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:40.623 13:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:40.623 13:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:40.623 13:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.623 13:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.623 13:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.623 13:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.623 13:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.623 13:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:40.623 13:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.623 13:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.623 13:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.623 13:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.623 "name": "raid_bdev1", 00:08:40.623 "uuid": "1a7e7772-cc90-486d-aa34-46c3e188f0ef", 00:08:40.623 "strip_size_kb": 64, 00:08:40.623 "state": "online", 00:08:40.623 "raid_level": "raid0", 00:08:40.623 "superblock": true, 00:08:40.623 "num_base_bdevs": 3, 00:08:40.623 "num_base_bdevs_discovered": 3, 00:08:40.623 "num_base_bdevs_operational": 3, 00:08:40.623 "base_bdevs_list": [ 00:08:40.623 { 00:08:40.623 "name": "BaseBdev1", 
00:08:40.623 "uuid": "05bb0a86-03d7-5815-85fe-b841f6333c9e", 00:08:40.623 "is_configured": true, 00:08:40.623 "data_offset": 2048, 00:08:40.623 "data_size": 63488 00:08:40.623 }, 00:08:40.623 { 00:08:40.623 "name": "BaseBdev2", 00:08:40.623 "uuid": "674c7fa8-b586-520c-ab56-64c6360766b1", 00:08:40.623 "is_configured": true, 00:08:40.623 "data_offset": 2048, 00:08:40.623 "data_size": 63488 00:08:40.623 }, 00:08:40.623 { 00:08:40.623 "name": "BaseBdev3", 00:08:40.623 "uuid": "54d34633-002d-50b1-93d0-6d14483baa50", 00:08:40.623 "is_configured": true, 00:08:40.623 "data_offset": 2048, 00:08:40.623 "data_size": 63488 00:08:40.623 } 00:08:40.623 ] 00:08:40.623 }' 00:08:40.623 13:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.623 13:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.192 13:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:41.192 13:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:41.192 [2024-11-17 13:18:30.216116] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:08:42.132 13:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:42.132 13:18:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.132 13:18:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.132 13:18:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.132 13:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:42.132 13:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:42.132 13:18:31 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:42.132 13:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:42.132 13:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:42.132 13:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:42.132 13:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:42.132 13:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:42.132 13:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:42.132 13:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.132 13:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.132 13:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.132 13:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.133 13:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:42.133 13:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.133 13:18:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.133 13:18:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.133 13:18:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.133 13:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.133 "name": "raid_bdev1", 00:08:42.133 "uuid": "1a7e7772-cc90-486d-aa34-46c3e188f0ef", 00:08:42.133 "strip_size_kb": 64, 00:08:42.133 "state": "online", 00:08:42.133 
"raid_level": "raid0", 00:08:42.133 "superblock": true, 00:08:42.133 "num_base_bdevs": 3, 00:08:42.133 "num_base_bdevs_discovered": 3, 00:08:42.133 "num_base_bdevs_operational": 3, 00:08:42.133 "base_bdevs_list": [ 00:08:42.133 { 00:08:42.133 "name": "BaseBdev1", 00:08:42.133 "uuid": "05bb0a86-03d7-5815-85fe-b841f6333c9e", 00:08:42.133 "is_configured": true, 00:08:42.133 "data_offset": 2048, 00:08:42.133 "data_size": 63488 00:08:42.133 }, 00:08:42.133 { 00:08:42.133 "name": "BaseBdev2", 00:08:42.133 "uuid": "674c7fa8-b586-520c-ab56-64c6360766b1", 00:08:42.133 "is_configured": true, 00:08:42.133 "data_offset": 2048, 00:08:42.133 "data_size": 63488 00:08:42.133 }, 00:08:42.133 { 00:08:42.133 "name": "BaseBdev3", 00:08:42.133 "uuid": "54d34633-002d-50b1-93d0-6d14483baa50", 00:08:42.133 "is_configured": true, 00:08:42.133 "data_offset": 2048, 00:08:42.133 "data_size": 63488 00:08:42.133 } 00:08:42.133 ] 00:08:42.133 }' 00:08:42.133 13:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.133 13:18:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.393 13:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:42.394 13:18:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.394 13:18:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.394 [2024-11-17 13:18:31.559827] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:42.394 [2024-11-17 13:18:31.559923] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:42.394 [2024-11-17 13:18:31.562627] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:42.394 [2024-11-17 13:18:31.562714] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:42.394 [2024-11-17 13:18:31.562771] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:42.394 [2024-11-17 13:18:31.562809] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:42.394 { 00:08:42.394 "results": [ 00:08:42.394 { 00:08:42.394 "job": "raid_bdev1", 00:08:42.394 "core_mask": "0x1", 00:08:42.394 "workload": "randrw", 00:08:42.394 "percentage": 50, 00:08:42.394 "status": "finished", 00:08:42.394 "queue_depth": 1, 00:08:42.394 "io_size": 131072, 00:08:42.394 "runtime": 1.344498, 00:08:42.394 "iops": 16113.07714849706, 00:08:42.394 "mibps": 2014.1346435621324, 00:08:42.394 "io_failed": 1, 00:08:42.394 "io_timeout": 0, 00:08:42.394 "avg_latency_us": 86.30257556258107, 00:08:42.394 "min_latency_us": 20.234061135371178, 00:08:42.394 "max_latency_us": 1523.926637554585 00:08:42.394 } 00:08:42.394 ], 00:08:42.394 "core_count": 1 00:08:42.394 } 00:08:42.394 13:18:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.394 13:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65399 00:08:42.394 13:18:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65399 ']' 00:08:42.394 13:18:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65399 00:08:42.394 13:18:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:42.394 13:18:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:42.394 13:18:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65399 00:08:42.394 13:18:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:42.394 13:18:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:42.394 13:18:31 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 65399' 00:08:42.394 killing process with pid 65399 00:08:42.394 13:18:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65399 00:08:42.394 [2024-11-17 13:18:31.600550] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:42.394 13:18:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65399 00:08:42.654 [2024-11-17 13:18:31.828453] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:44.036 13:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.sJvPpm5LXS 00:08:44.036 13:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:44.036 13:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:44.036 13:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:08:44.036 ************************************ 00:08:44.036 END TEST raid_write_error_test 00:08:44.036 ************************************ 00:08:44.036 13:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:44.036 13:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:44.036 13:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:44.036 13:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:08:44.036 00:08:44.036 real 0m4.415s 00:08:44.036 user 0m5.206s 00:08:44.036 sys 0m0.532s 00:08:44.036 13:18:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:44.036 13:18:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.036 13:18:33 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:44.036 13:18:33 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:08:44.036 13:18:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:44.036 13:18:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:44.036 13:18:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:44.036 ************************************ 00:08:44.036 START TEST raid_state_function_test 00:08:44.036 ************************************ 00:08:44.036 13:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:08:44.036 13:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:44.036 13:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:44.036 13:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:44.036 13:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:44.036 13:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:44.036 13:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:44.036 13:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:44.036 13:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:44.036 13:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:44.036 13:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:44.036 13:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:44.036 13:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:44.036 13:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:44.036 13:18:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:44.036 13:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:44.036 13:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:44.036 13:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:44.036 13:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:44.036 13:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:44.036 13:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:44.036 13:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:44.036 13:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:44.036 13:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:44.036 13:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:44.036 13:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:44.036 13:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:44.036 Process raid pid: 65543 00:08:44.036 13:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65543 00:08:44.036 13:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:44.036 13:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65543' 00:08:44.037 13:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65543 00:08:44.037 13:18:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65543 ']' 00:08:44.037 13:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.037 13:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:44.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:44.037 13:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:44.037 13:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:44.037 13:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.037 [2024-11-17 13:18:33.159876] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:08:44.037 [2024-11-17 13:18:33.159998] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:44.296 [2024-11-17 13:18:33.337077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.296 [2024-11-17 13:18:33.455868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.556 [2024-11-17 13:18:33.661758] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:44.556 [2024-11-17 13:18:33.661890] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:44.816 13:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:44.816 13:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:44.816 13:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:44.816 13:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.816 13:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.816 [2024-11-17 13:18:34.003256] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:44.816 [2024-11-17 13:18:34.003367] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:44.816 [2024-11-17 13:18:34.003414] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:44.816 [2024-11-17 13:18:34.003439] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:44.816 [2024-11-17 13:18:34.003459] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:44.816 [2024-11-17 13:18:34.003480] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:44.816 13:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.816 13:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:44.816 13:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:44.816 13:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:44.816 13:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:44.816 13:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:44.816 13:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:44.816 13:18:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.816 13:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.816 13:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.816 13:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.816 13:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.816 13:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:44.816 13:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.816 13:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.816 13:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.075 13:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.075 "name": "Existed_Raid", 00:08:45.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.075 "strip_size_kb": 64, 00:08:45.075 "state": "configuring", 00:08:45.075 "raid_level": "concat", 00:08:45.075 "superblock": false, 00:08:45.075 "num_base_bdevs": 3, 00:08:45.075 "num_base_bdevs_discovered": 0, 00:08:45.075 "num_base_bdevs_operational": 3, 00:08:45.075 "base_bdevs_list": [ 00:08:45.075 { 00:08:45.075 "name": "BaseBdev1", 00:08:45.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.075 "is_configured": false, 00:08:45.075 "data_offset": 0, 00:08:45.075 "data_size": 0 00:08:45.075 }, 00:08:45.075 { 00:08:45.075 "name": "BaseBdev2", 00:08:45.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.075 "is_configured": false, 00:08:45.075 "data_offset": 0, 00:08:45.075 "data_size": 0 00:08:45.075 }, 00:08:45.075 { 00:08:45.075 "name": "BaseBdev3", 00:08:45.075 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:45.075 "is_configured": false, 00:08:45.075 "data_offset": 0, 00:08:45.075 "data_size": 0 00:08:45.075 } 00:08:45.075 ] 00:08:45.075 }' 00:08:45.075 13:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.075 13:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.336 13:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:45.336 13:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.336 13:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.336 [2024-11-17 13:18:34.438464] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:45.336 [2024-11-17 13:18:34.438505] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:45.336 13:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.336 13:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:45.336 13:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.336 13:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.336 [2024-11-17 13:18:34.446435] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:45.336 [2024-11-17 13:18:34.446537] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:45.336 [2024-11-17 13:18:34.446572] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:45.336 [2024-11-17 13:18:34.446631] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:08:45.336 [2024-11-17 13:18:34.446668] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:45.336 [2024-11-17 13:18:34.446706] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:45.336 13:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.336 13:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:45.336 13:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.336 13:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.336 [2024-11-17 13:18:34.491219] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:45.336 BaseBdev1 00:08:45.336 13:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.336 13:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:45.336 13:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:45.336 13:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:45.336 13:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:45.336 13:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:45.336 13:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:45.336 13:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:45.336 13:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.336 13:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:45.336 13:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.336 13:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:45.336 13:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.336 13:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.336 [ 00:08:45.336 { 00:08:45.336 "name": "BaseBdev1", 00:08:45.336 "aliases": [ 00:08:45.336 "cc82e1a0-b0ca-4ae9-862a-4cd21c1e5ca1" 00:08:45.336 ], 00:08:45.336 "product_name": "Malloc disk", 00:08:45.336 "block_size": 512, 00:08:45.336 "num_blocks": 65536, 00:08:45.336 "uuid": "cc82e1a0-b0ca-4ae9-862a-4cd21c1e5ca1", 00:08:45.336 "assigned_rate_limits": { 00:08:45.336 "rw_ios_per_sec": 0, 00:08:45.336 "rw_mbytes_per_sec": 0, 00:08:45.336 "r_mbytes_per_sec": 0, 00:08:45.336 "w_mbytes_per_sec": 0 00:08:45.336 }, 00:08:45.336 "claimed": true, 00:08:45.336 "claim_type": "exclusive_write", 00:08:45.336 "zoned": false, 00:08:45.336 "supported_io_types": { 00:08:45.336 "read": true, 00:08:45.336 "write": true, 00:08:45.336 "unmap": true, 00:08:45.336 "flush": true, 00:08:45.336 "reset": true, 00:08:45.336 "nvme_admin": false, 00:08:45.336 "nvme_io": false, 00:08:45.336 "nvme_io_md": false, 00:08:45.336 "write_zeroes": true, 00:08:45.336 "zcopy": true, 00:08:45.336 "get_zone_info": false, 00:08:45.336 "zone_management": false, 00:08:45.336 "zone_append": false, 00:08:45.336 "compare": false, 00:08:45.336 "compare_and_write": false, 00:08:45.336 "abort": true, 00:08:45.336 "seek_hole": false, 00:08:45.336 "seek_data": false, 00:08:45.336 "copy": true, 00:08:45.336 "nvme_iov_md": false 00:08:45.336 }, 00:08:45.336 "memory_domains": [ 00:08:45.336 { 00:08:45.336 "dma_device_id": "system", 00:08:45.336 "dma_device_type": 1 00:08:45.336 }, 00:08:45.336 { 00:08:45.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:08:45.336 "dma_device_type": 2 00:08:45.336 } 00:08:45.336 ], 00:08:45.336 "driver_specific": {} 00:08:45.336 } 00:08:45.336 ] 00:08:45.336 13:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.336 13:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:45.336 13:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:45.336 13:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:45.336 13:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:45.336 13:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:45.336 13:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:45.336 13:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:45.336 13:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.336 13:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.336 13:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.336 13:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.336 13:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.336 13:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:45.336 13:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.336 13:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.336 13:18:34 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.596 13:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.596 "name": "Existed_Raid", 00:08:45.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.596 "strip_size_kb": 64, 00:08:45.596 "state": "configuring", 00:08:45.596 "raid_level": "concat", 00:08:45.596 "superblock": false, 00:08:45.596 "num_base_bdevs": 3, 00:08:45.596 "num_base_bdevs_discovered": 1, 00:08:45.596 "num_base_bdevs_operational": 3, 00:08:45.596 "base_bdevs_list": [ 00:08:45.596 { 00:08:45.596 "name": "BaseBdev1", 00:08:45.596 "uuid": "cc82e1a0-b0ca-4ae9-862a-4cd21c1e5ca1", 00:08:45.596 "is_configured": true, 00:08:45.596 "data_offset": 0, 00:08:45.596 "data_size": 65536 00:08:45.596 }, 00:08:45.596 { 00:08:45.596 "name": "BaseBdev2", 00:08:45.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.596 "is_configured": false, 00:08:45.596 "data_offset": 0, 00:08:45.596 "data_size": 0 00:08:45.596 }, 00:08:45.596 { 00:08:45.596 "name": "BaseBdev3", 00:08:45.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.596 "is_configured": false, 00:08:45.596 "data_offset": 0, 00:08:45.596 "data_size": 0 00:08:45.596 } 00:08:45.596 ] 00:08:45.596 }' 00:08:45.596 13:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.596 13:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.857 13:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:45.857 13:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.857 13:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.857 [2024-11-17 13:18:35.006401] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:45.857 [2024-11-17 13:18:35.006463] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:45.857 13:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.857 13:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:45.857 13:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.857 13:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.857 [2024-11-17 13:18:35.014444] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:45.857 [2024-11-17 13:18:35.016361] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:45.857 [2024-11-17 13:18:35.016407] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:45.857 [2024-11-17 13:18:35.016418] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:45.857 [2024-11-17 13:18:35.016427] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:45.857 13:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.857 13:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:45.857 13:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:45.857 13:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:45.857 13:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:45.857 13:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:45.857 13:18:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:45.857 13:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:45.857 13:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:45.857 13:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.857 13:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.857 13:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.857 13:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.857 13:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.857 13:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:45.857 13:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.857 13:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.857 13:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.857 13:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.857 "name": "Existed_Raid", 00:08:45.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.857 "strip_size_kb": 64, 00:08:45.857 "state": "configuring", 00:08:45.857 "raid_level": "concat", 00:08:45.857 "superblock": false, 00:08:45.857 "num_base_bdevs": 3, 00:08:45.857 "num_base_bdevs_discovered": 1, 00:08:45.857 "num_base_bdevs_operational": 3, 00:08:45.857 "base_bdevs_list": [ 00:08:45.857 { 00:08:45.857 "name": "BaseBdev1", 00:08:45.857 "uuid": "cc82e1a0-b0ca-4ae9-862a-4cd21c1e5ca1", 00:08:45.857 "is_configured": true, 00:08:45.857 "data_offset": 
0, 00:08:45.857 "data_size": 65536 00:08:45.857 }, 00:08:45.857 { 00:08:45.857 "name": "BaseBdev2", 00:08:45.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.857 "is_configured": false, 00:08:45.857 "data_offset": 0, 00:08:45.857 "data_size": 0 00:08:45.857 }, 00:08:45.857 { 00:08:45.857 "name": "BaseBdev3", 00:08:45.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.857 "is_configured": false, 00:08:45.857 "data_offset": 0, 00:08:45.857 "data_size": 0 00:08:45.857 } 00:08:45.857 ] 00:08:45.857 }' 00:08:45.857 13:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.857 13:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.427 13:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:46.427 13:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.427 13:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.427 [2024-11-17 13:18:35.504876] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:46.427 BaseBdev2 00:08:46.427 13:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.427 13:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:46.427 13:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:46.427 13:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:46.427 13:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:46.427 13:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:46.427 13:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:08:46.427 13:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:46.427 13:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.427 13:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.427 13:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.427 13:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:46.427 13:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.427 13:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.427 [ 00:08:46.427 { 00:08:46.427 "name": "BaseBdev2", 00:08:46.427 "aliases": [ 00:08:46.427 "85b9f901-abe5-487b-84d7-bfe3b9be970a" 00:08:46.427 ], 00:08:46.427 "product_name": "Malloc disk", 00:08:46.427 "block_size": 512, 00:08:46.427 "num_blocks": 65536, 00:08:46.427 "uuid": "85b9f901-abe5-487b-84d7-bfe3b9be970a", 00:08:46.427 "assigned_rate_limits": { 00:08:46.427 "rw_ios_per_sec": 0, 00:08:46.427 "rw_mbytes_per_sec": 0, 00:08:46.427 "r_mbytes_per_sec": 0, 00:08:46.427 "w_mbytes_per_sec": 0 00:08:46.427 }, 00:08:46.427 "claimed": true, 00:08:46.427 "claim_type": "exclusive_write", 00:08:46.427 "zoned": false, 00:08:46.427 "supported_io_types": { 00:08:46.427 "read": true, 00:08:46.427 "write": true, 00:08:46.427 "unmap": true, 00:08:46.427 "flush": true, 00:08:46.427 "reset": true, 00:08:46.427 "nvme_admin": false, 00:08:46.427 "nvme_io": false, 00:08:46.427 "nvme_io_md": false, 00:08:46.427 "write_zeroes": true, 00:08:46.427 "zcopy": true, 00:08:46.427 "get_zone_info": false, 00:08:46.427 "zone_management": false, 00:08:46.427 "zone_append": false, 00:08:46.427 "compare": false, 00:08:46.427 "compare_and_write": false, 00:08:46.427 "abort": true, 00:08:46.427 "seek_hole": 
false, 00:08:46.427 "seek_data": false, 00:08:46.427 "copy": true, 00:08:46.427 "nvme_iov_md": false 00:08:46.427 }, 00:08:46.427 "memory_domains": [ 00:08:46.427 { 00:08:46.427 "dma_device_id": "system", 00:08:46.427 "dma_device_type": 1 00:08:46.427 }, 00:08:46.427 { 00:08:46.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.427 "dma_device_type": 2 00:08:46.427 } 00:08:46.427 ], 00:08:46.427 "driver_specific": {} 00:08:46.427 } 00:08:46.427 ] 00:08:46.427 13:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.427 13:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:46.427 13:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:46.427 13:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:46.428 13:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:46.428 13:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:46.428 13:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:46.428 13:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:46.428 13:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:46.428 13:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:46.428 13:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.428 13:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.428 13:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.428 13:18:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.428 13:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.428 13:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:46.428 13:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.428 13:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.428 13:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.428 13:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.428 "name": "Existed_Raid", 00:08:46.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.428 "strip_size_kb": 64, 00:08:46.428 "state": "configuring", 00:08:46.428 "raid_level": "concat", 00:08:46.428 "superblock": false, 00:08:46.428 "num_base_bdevs": 3, 00:08:46.428 "num_base_bdevs_discovered": 2, 00:08:46.428 "num_base_bdevs_operational": 3, 00:08:46.428 "base_bdevs_list": [ 00:08:46.428 { 00:08:46.428 "name": "BaseBdev1", 00:08:46.428 "uuid": "cc82e1a0-b0ca-4ae9-862a-4cd21c1e5ca1", 00:08:46.428 "is_configured": true, 00:08:46.428 "data_offset": 0, 00:08:46.428 "data_size": 65536 00:08:46.428 }, 00:08:46.428 { 00:08:46.428 "name": "BaseBdev2", 00:08:46.428 "uuid": "85b9f901-abe5-487b-84d7-bfe3b9be970a", 00:08:46.428 "is_configured": true, 00:08:46.428 "data_offset": 0, 00:08:46.428 "data_size": 65536 00:08:46.428 }, 00:08:46.428 { 00:08:46.428 "name": "BaseBdev3", 00:08:46.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.428 "is_configured": false, 00:08:46.428 "data_offset": 0, 00:08:46.428 "data_size": 0 00:08:46.428 } 00:08:46.428 ] 00:08:46.428 }' 00:08:46.428 13:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.428 13:18:35 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:47.014 13:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:47.014 13:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.014 13:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.014 [2024-11-17 13:18:36.014762] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:47.014 [2024-11-17 13:18:36.014810] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:47.014 [2024-11-17 13:18:36.014822] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:47.014 [2024-11-17 13:18:36.015104] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:47.014 [2024-11-17 13:18:36.015289] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:47.014 [2024-11-17 13:18:36.015301] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:47.014 [2024-11-17 13:18:36.015568] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:47.014 BaseBdev3 00:08:47.014 13:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.014 13:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:47.014 13:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:47.014 13:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:47.014 13:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:47.014 13:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:47.014 13:18:36 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:47.014 13:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:47.014 13:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.014 13:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.014 13:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.014 13:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:47.014 13:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.014 13:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.014 [ 00:08:47.014 { 00:08:47.014 "name": "BaseBdev3", 00:08:47.014 "aliases": [ 00:08:47.014 "32214314-b8e7-41c2-b4d8-4126281093dc" 00:08:47.014 ], 00:08:47.014 "product_name": "Malloc disk", 00:08:47.014 "block_size": 512, 00:08:47.014 "num_blocks": 65536, 00:08:47.014 "uuid": "32214314-b8e7-41c2-b4d8-4126281093dc", 00:08:47.014 "assigned_rate_limits": { 00:08:47.014 "rw_ios_per_sec": 0, 00:08:47.014 "rw_mbytes_per_sec": 0, 00:08:47.014 "r_mbytes_per_sec": 0, 00:08:47.014 "w_mbytes_per_sec": 0 00:08:47.014 }, 00:08:47.014 "claimed": true, 00:08:47.014 "claim_type": "exclusive_write", 00:08:47.014 "zoned": false, 00:08:47.014 "supported_io_types": { 00:08:47.014 "read": true, 00:08:47.014 "write": true, 00:08:47.014 "unmap": true, 00:08:47.014 "flush": true, 00:08:47.014 "reset": true, 00:08:47.014 "nvme_admin": false, 00:08:47.014 "nvme_io": false, 00:08:47.014 "nvme_io_md": false, 00:08:47.014 "write_zeroes": true, 00:08:47.014 "zcopy": true, 00:08:47.014 "get_zone_info": false, 00:08:47.014 "zone_management": false, 00:08:47.014 "zone_append": false, 00:08:47.014 "compare": false, 
00:08:47.014 "compare_and_write": false, 00:08:47.014 "abort": true, 00:08:47.014 "seek_hole": false, 00:08:47.014 "seek_data": false, 00:08:47.014 "copy": true, 00:08:47.014 "nvme_iov_md": false 00:08:47.014 }, 00:08:47.014 "memory_domains": [ 00:08:47.014 { 00:08:47.014 "dma_device_id": "system", 00:08:47.014 "dma_device_type": 1 00:08:47.014 }, 00:08:47.014 { 00:08:47.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.014 "dma_device_type": 2 00:08:47.014 } 00:08:47.014 ], 00:08:47.014 "driver_specific": {} 00:08:47.014 } 00:08:47.014 ] 00:08:47.014 13:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.014 13:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:47.014 13:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:47.014 13:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:47.014 13:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:47.014 13:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:47.014 13:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:47.014 13:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:47.014 13:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:47.014 13:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:47.014 13:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.014 13:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.014 13:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:47.014 13:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.014 13:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.014 13:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:47.014 13:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.014 13:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.014 13:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.014 13:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.014 "name": "Existed_Raid", 00:08:47.014 "uuid": "479cd42b-809d-4046-a9e3-d076ebf9f4d0", 00:08:47.014 "strip_size_kb": 64, 00:08:47.014 "state": "online", 00:08:47.014 "raid_level": "concat", 00:08:47.014 "superblock": false, 00:08:47.015 "num_base_bdevs": 3, 00:08:47.015 "num_base_bdevs_discovered": 3, 00:08:47.015 "num_base_bdevs_operational": 3, 00:08:47.015 "base_bdevs_list": [ 00:08:47.015 { 00:08:47.015 "name": "BaseBdev1", 00:08:47.015 "uuid": "cc82e1a0-b0ca-4ae9-862a-4cd21c1e5ca1", 00:08:47.015 "is_configured": true, 00:08:47.015 "data_offset": 0, 00:08:47.015 "data_size": 65536 00:08:47.015 }, 00:08:47.015 { 00:08:47.015 "name": "BaseBdev2", 00:08:47.015 "uuid": "85b9f901-abe5-487b-84d7-bfe3b9be970a", 00:08:47.015 "is_configured": true, 00:08:47.015 "data_offset": 0, 00:08:47.015 "data_size": 65536 00:08:47.015 }, 00:08:47.015 { 00:08:47.015 "name": "BaseBdev3", 00:08:47.015 "uuid": "32214314-b8e7-41c2-b4d8-4126281093dc", 00:08:47.015 "is_configured": true, 00:08:47.015 "data_offset": 0, 00:08:47.015 "data_size": 65536 00:08:47.015 } 00:08:47.015 ] 00:08:47.015 }' 00:08:47.015 13:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:47.015 13:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.275 13:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:47.275 13:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:47.275 13:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:47.275 13:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:47.275 13:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:47.275 13:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:47.275 13:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:47.275 13:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:47.275 13:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.275 13:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.535 [2024-11-17 13:18:36.502421] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:47.535 13:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.535 13:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:47.535 "name": "Existed_Raid", 00:08:47.535 "aliases": [ 00:08:47.535 "479cd42b-809d-4046-a9e3-d076ebf9f4d0" 00:08:47.535 ], 00:08:47.535 "product_name": "Raid Volume", 00:08:47.535 "block_size": 512, 00:08:47.535 "num_blocks": 196608, 00:08:47.535 "uuid": "479cd42b-809d-4046-a9e3-d076ebf9f4d0", 00:08:47.535 "assigned_rate_limits": { 00:08:47.535 "rw_ios_per_sec": 0, 00:08:47.535 "rw_mbytes_per_sec": 0, 00:08:47.535 "r_mbytes_per_sec": 
0, 00:08:47.535 "w_mbytes_per_sec": 0 00:08:47.535 }, 00:08:47.535 "claimed": false, 00:08:47.535 "zoned": false, 00:08:47.535 "supported_io_types": { 00:08:47.535 "read": true, 00:08:47.535 "write": true, 00:08:47.535 "unmap": true, 00:08:47.535 "flush": true, 00:08:47.535 "reset": true, 00:08:47.535 "nvme_admin": false, 00:08:47.535 "nvme_io": false, 00:08:47.535 "nvme_io_md": false, 00:08:47.535 "write_zeroes": true, 00:08:47.535 "zcopy": false, 00:08:47.535 "get_zone_info": false, 00:08:47.535 "zone_management": false, 00:08:47.535 "zone_append": false, 00:08:47.535 "compare": false, 00:08:47.535 "compare_and_write": false, 00:08:47.535 "abort": false, 00:08:47.535 "seek_hole": false, 00:08:47.535 "seek_data": false, 00:08:47.535 "copy": false, 00:08:47.535 "nvme_iov_md": false 00:08:47.535 }, 00:08:47.535 "memory_domains": [ 00:08:47.535 { 00:08:47.535 "dma_device_id": "system", 00:08:47.535 "dma_device_type": 1 00:08:47.535 }, 00:08:47.535 { 00:08:47.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.535 "dma_device_type": 2 00:08:47.535 }, 00:08:47.535 { 00:08:47.535 "dma_device_id": "system", 00:08:47.535 "dma_device_type": 1 00:08:47.535 }, 00:08:47.535 { 00:08:47.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.535 "dma_device_type": 2 00:08:47.535 }, 00:08:47.535 { 00:08:47.535 "dma_device_id": "system", 00:08:47.535 "dma_device_type": 1 00:08:47.535 }, 00:08:47.535 { 00:08:47.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.535 "dma_device_type": 2 00:08:47.535 } 00:08:47.535 ], 00:08:47.535 "driver_specific": { 00:08:47.535 "raid": { 00:08:47.535 "uuid": "479cd42b-809d-4046-a9e3-d076ebf9f4d0", 00:08:47.535 "strip_size_kb": 64, 00:08:47.535 "state": "online", 00:08:47.535 "raid_level": "concat", 00:08:47.535 "superblock": false, 00:08:47.535 "num_base_bdevs": 3, 00:08:47.535 "num_base_bdevs_discovered": 3, 00:08:47.535 "num_base_bdevs_operational": 3, 00:08:47.535 "base_bdevs_list": [ 00:08:47.535 { 00:08:47.535 "name": "BaseBdev1", 
00:08:47.535 "uuid": "cc82e1a0-b0ca-4ae9-862a-4cd21c1e5ca1", 00:08:47.535 "is_configured": true, 00:08:47.535 "data_offset": 0, 00:08:47.535 "data_size": 65536 00:08:47.535 }, 00:08:47.535 { 00:08:47.535 "name": "BaseBdev2", 00:08:47.535 "uuid": "85b9f901-abe5-487b-84d7-bfe3b9be970a", 00:08:47.535 "is_configured": true, 00:08:47.535 "data_offset": 0, 00:08:47.535 "data_size": 65536 00:08:47.535 }, 00:08:47.535 { 00:08:47.535 "name": "BaseBdev3", 00:08:47.535 "uuid": "32214314-b8e7-41c2-b4d8-4126281093dc", 00:08:47.535 "is_configured": true, 00:08:47.535 "data_offset": 0, 00:08:47.535 "data_size": 65536 00:08:47.535 } 00:08:47.535 ] 00:08:47.535 } 00:08:47.535 } 00:08:47.535 }' 00:08:47.535 13:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:47.535 13:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:47.535 BaseBdev2 00:08:47.535 BaseBdev3' 00:08:47.535 13:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:47.535 13:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:47.535 13:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:47.535 13:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:47.535 13:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:47.535 13:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.535 13:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.535 13:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:47.535 13:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:47.535 13:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:47.535 13:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:47.535 13:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:47.535 13:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:47.535 13:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.535 13:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.535 13:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.535 13:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:47.535 13:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:47.535 13:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:47.535 13:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:47.535 13:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:47.535 13:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.535 13:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.535 13:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.794 13:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
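The trace above loops over each base bdev and compares its metadata fields (block_size, md_size, md_interleave, dif_type) against the RAID bdev's, via `rpc_cmd bdev_get_bdevs -b "$name"` piped through `jq`. A minimal self-contained sketch of that comparison pattern follows; the `get_bdev_fields` stub is hypothetical and stands in for the real RPC/jq pipeline, while the bdev names and the `'512   '` value (block_size 512 plus empty metadata fields joined by spaces) are taken from the log:

```shell
#!/usr/bin/env bash
# Sketch of the per-bdev metadata comparison seen at bdev_raid.sh@188-@193 in the trace.
# In the real test each value comes from:
#   rpc_cmd bdev_get_bdevs -b "$name" \
#     | jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
# Here the RPC is stubbed so the sketch runs standalone.

get_bdev_fields() {   # hypothetical stub replacing the rpc_cmd | jq pipeline
  echo '512   '       # block_size=512; md_size, md_interleave, dif_type are empty
}

base_bdev_names='BaseBdev1 BaseBdev2 BaseBdev3'
cmp_raid_bdev='512   '  # same fields, read once from the RAID bdev itself

for name in $base_bdev_names; do
  cmp_base_bdev="$(get_bdev_fields "$name")"
  # Each base bdev must report identical geometry to the RAID bdev.
  [[ "$cmp_base_bdev" == "$cmp_raid_bdev" ]] || { echo "$name: mismatch"; exit 1; }
done
echo "all base bdevs match raid bdev"
```

Joining the four fields into one string lets the test compare all of them with a single `[[ … == … ]]`, which is why the trace shows the escaped-space pattern `\5\1\2\ \ \ `: the empty metadata fields survive as trailing spaces in the joined value.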
00:08:47.794 13:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:47.794 13:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:47.794 13:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.794 13:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.794 [2024-11-17 13:18:36.765657] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:47.794 [2024-11-17 13:18:36.765747] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:47.794 [2024-11-17 13:18:36.765823] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:47.794 13:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.794 13:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:47.794 13:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:47.794 13:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:47.794 13:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:47.794 13:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:47.794 13:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:08:47.794 13:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:47.794 13:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:47.794 13:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:47.794 13:18:36 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:47.794 13:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:47.794 13:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.794 13:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.794 13:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.794 13:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.794 13:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.794 13:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:47.794 13:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.794 13:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.794 13:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.794 13:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.794 "name": "Existed_Raid", 00:08:47.794 "uuid": "479cd42b-809d-4046-a9e3-d076ebf9f4d0", 00:08:47.794 "strip_size_kb": 64, 00:08:47.794 "state": "offline", 00:08:47.794 "raid_level": "concat", 00:08:47.794 "superblock": false, 00:08:47.794 "num_base_bdevs": 3, 00:08:47.794 "num_base_bdevs_discovered": 2, 00:08:47.794 "num_base_bdevs_operational": 2, 00:08:47.794 "base_bdevs_list": [ 00:08:47.794 { 00:08:47.794 "name": null, 00:08:47.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.794 "is_configured": false, 00:08:47.794 "data_offset": 0, 00:08:47.794 "data_size": 65536 00:08:47.794 }, 00:08:47.794 { 00:08:47.794 "name": "BaseBdev2", 00:08:47.794 "uuid": 
"85b9f901-abe5-487b-84d7-bfe3b9be970a", 00:08:47.794 "is_configured": true, 00:08:47.794 "data_offset": 0, 00:08:47.794 "data_size": 65536 00:08:47.794 }, 00:08:47.794 { 00:08:47.794 "name": "BaseBdev3", 00:08:47.794 "uuid": "32214314-b8e7-41c2-b4d8-4126281093dc", 00:08:47.794 "is_configured": true, 00:08:47.794 "data_offset": 0, 00:08:47.794 "data_size": 65536 00:08:47.794 } 00:08:47.794 ] 00:08:47.794 }' 00:08:47.794 13:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.794 13:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.371 13:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:48.371 13:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:48.372 13:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:48.372 13:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.372 13:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.372 13:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.372 13:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.372 13:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:48.372 13:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:48.372 13:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:48.372 13:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.372 13:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.372 [2024-11-17 13:18:37.336567] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:48.372 13:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.372 13:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:48.372 13:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:48.372 13:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:48.372 13:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.372 13:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.372 13:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.372 13:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.372 13:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:48.372 13:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:48.372 13:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:48.372 13:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.372 13:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.372 [2024-11-17 13:18:37.490282] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:48.372 [2024-11-17 13:18:37.490333] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:48.372 13:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.372 13:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:48.372 13:18:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:48.372 13:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.372 13:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:48.372 13:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.372 13:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.631 13:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.631 13:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:48.631 13:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:48.631 13:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:48.631 13:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:48.631 13:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:48.631 13:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:48.631 13:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.631 13:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.631 BaseBdev2 00:08:48.631 13:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.631 13:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:48.631 13:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:48.631 13:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:48.631 
13:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:48.631 13:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:48.631 13:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:48.631 13:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:48.631 13:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.631 13:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.631 13:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.631 13:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:48.631 13:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.631 13:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.631 [ 00:08:48.631 { 00:08:48.631 "name": "BaseBdev2", 00:08:48.631 "aliases": [ 00:08:48.631 "e0cc2338-180e-48d2-a00f-429c94cf4d85" 00:08:48.631 ], 00:08:48.631 "product_name": "Malloc disk", 00:08:48.631 "block_size": 512, 00:08:48.631 "num_blocks": 65536, 00:08:48.631 "uuid": "e0cc2338-180e-48d2-a00f-429c94cf4d85", 00:08:48.631 "assigned_rate_limits": { 00:08:48.631 "rw_ios_per_sec": 0, 00:08:48.631 "rw_mbytes_per_sec": 0, 00:08:48.631 "r_mbytes_per_sec": 0, 00:08:48.631 "w_mbytes_per_sec": 0 00:08:48.631 }, 00:08:48.631 "claimed": false, 00:08:48.631 "zoned": false, 00:08:48.631 "supported_io_types": { 00:08:48.631 "read": true, 00:08:48.631 "write": true, 00:08:48.631 "unmap": true, 00:08:48.631 "flush": true, 00:08:48.631 "reset": true, 00:08:48.631 "nvme_admin": false, 00:08:48.631 "nvme_io": false, 00:08:48.631 "nvme_io_md": false, 00:08:48.631 "write_zeroes": true, 
00:08:48.631 "zcopy": true, 00:08:48.631 "get_zone_info": false, 00:08:48.631 "zone_management": false, 00:08:48.631 "zone_append": false, 00:08:48.631 "compare": false, 00:08:48.631 "compare_and_write": false, 00:08:48.631 "abort": true, 00:08:48.631 "seek_hole": false, 00:08:48.631 "seek_data": false, 00:08:48.631 "copy": true, 00:08:48.631 "nvme_iov_md": false 00:08:48.631 }, 00:08:48.631 "memory_domains": [ 00:08:48.631 { 00:08:48.631 "dma_device_id": "system", 00:08:48.631 "dma_device_type": 1 00:08:48.631 }, 00:08:48.631 { 00:08:48.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.631 "dma_device_type": 2 00:08:48.631 } 00:08:48.631 ], 00:08:48.631 "driver_specific": {} 00:08:48.631 } 00:08:48.631 ] 00:08:48.632 13:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.632 13:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:48.632 13:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:48.632 13:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:48.632 13:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:48.632 13:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.632 13:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.632 BaseBdev3 00:08:48.632 13:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.632 13:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:48.632 13:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:48.632 13:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:48.632 13:18:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:48.632 13:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:48.632 13:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:48.632 13:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:48.632 13:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.632 13:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.632 13:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.632 13:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:48.632 13:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.632 13:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.632 [ 00:08:48.632 { 00:08:48.632 "name": "BaseBdev3", 00:08:48.632 "aliases": [ 00:08:48.632 "db468a2f-23cf-4218-8ba9-32c44cb7927e" 00:08:48.632 ], 00:08:48.632 "product_name": "Malloc disk", 00:08:48.632 "block_size": 512, 00:08:48.632 "num_blocks": 65536, 00:08:48.632 "uuid": "db468a2f-23cf-4218-8ba9-32c44cb7927e", 00:08:48.632 "assigned_rate_limits": { 00:08:48.632 "rw_ios_per_sec": 0, 00:08:48.632 "rw_mbytes_per_sec": 0, 00:08:48.632 "r_mbytes_per_sec": 0, 00:08:48.632 "w_mbytes_per_sec": 0 00:08:48.632 }, 00:08:48.632 "claimed": false, 00:08:48.632 "zoned": false, 00:08:48.632 "supported_io_types": { 00:08:48.632 "read": true, 00:08:48.632 "write": true, 00:08:48.632 "unmap": true, 00:08:48.632 "flush": true, 00:08:48.632 "reset": true, 00:08:48.632 "nvme_admin": false, 00:08:48.632 "nvme_io": false, 00:08:48.632 "nvme_io_md": false, 00:08:48.632 "write_zeroes": true, 
00:08:48.632 "zcopy": true, 00:08:48.632 "get_zone_info": false, 00:08:48.632 "zone_management": false, 00:08:48.632 "zone_append": false, 00:08:48.632 "compare": false, 00:08:48.632 "compare_and_write": false, 00:08:48.632 "abort": true, 00:08:48.632 "seek_hole": false, 00:08:48.632 "seek_data": false, 00:08:48.632 "copy": true, 00:08:48.632 "nvme_iov_md": false 00:08:48.632 }, 00:08:48.632 "memory_domains": [ 00:08:48.632 { 00:08:48.632 "dma_device_id": "system", 00:08:48.632 "dma_device_type": 1 00:08:48.632 }, 00:08:48.632 { 00:08:48.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.632 "dma_device_type": 2 00:08:48.632 } 00:08:48.632 ], 00:08:48.632 "driver_specific": {} 00:08:48.632 } 00:08:48.632 ] 00:08:48.632 13:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.632 13:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:48.632 13:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:48.632 13:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:48.632 13:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:48.632 13:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.632 13:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.632 [2024-11-17 13:18:37.799849] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:48.632 [2024-11-17 13:18:37.799948] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:48.632 [2024-11-17 13:18:37.799989] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:48.632 [2024-11-17 13:18:37.801880] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:48.632 13:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.632 13:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:48.632 13:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.632 13:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:48.632 13:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:48.632 13:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.632 13:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:48.632 13:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.632 13:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.632 13:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.632 13:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.632 13:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.632 13:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.632 13:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.632 13:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.632 13:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.892 13:18:37 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.892 "name": "Existed_Raid", 00:08:48.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.892 "strip_size_kb": 64, 00:08:48.892 "state": "configuring", 00:08:48.892 "raid_level": "concat", 00:08:48.892 "superblock": false, 00:08:48.892 "num_base_bdevs": 3, 00:08:48.892 "num_base_bdevs_discovered": 2, 00:08:48.892 "num_base_bdevs_operational": 3, 00:08:48.892 "base_bdevs_list": [ 00:08:48.892 { 00:08:48.892 "name": "BaseBdev1", 00:08:48.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.892 "is_configured": false, 00:08:48.892 "data_offset": 0, 00:08:48.892 "data_size": 0 00:08:48.892 }, 00:08:48.892 { 00:08:48.892 "name": "BaseBdev2", 00:08:48.892 "uuid": "e0cc2338-180e-48d2-a00f-429c94cf4d85", 00:08:48.892 "is_configured": true, 00:08:48.892 "data_offset": 0, 00:08:48.892 "data_size": 65536 00:08:48.892 }, 00:08:48.892 { 00:08:48.892 "name": "BaseBdev3", 00:08:48.892 "uuid": "db468a2f-23cf-4218-8ba9-32c44cb7927e", 00:08:48.892 "is_configured": true, 00:08:48.892 "data_offset": 0, 00:08:48.892 "data_size": 65536 00:08:48.892 } 00:08:48.892 ] 00:08:48.892 }' 00:08:48.892 13:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.892 13:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.151 13:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:49.151 13:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.151 13:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.151 [2024-11-17 13:18:38.267127] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:49.151 13:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.151 13:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:49.151 13:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.151 13:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:49.151 13:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:49.151 13:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.151 13:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:49.151 13:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.151 13:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.151 13:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.151 13:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.151 13:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.151 13:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.151 13:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.151 13:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.151 13:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.151 13:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.151 "name": "Existed_Raid", 00:08:49.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.151 "strip_size_kb": 64, 00:08:49.151 "state": "configuring", 00:08:49.151 "raid_level": "concat", 00:08:49.151 "superblock": false, 
00:08:49.151 "num_base_bdevs": 3, 00:08:49.151 "num_base_bdevs_discovered": 1, 00:08:49.151 "num_base_bdevs_operational": 3, 00:08:49.151 "base_bdevs_list": [ 00:08:49.151 { 00:08:49.151 "name": "BaseBdev1", 00:08:49.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.151 "is_configured": false, 00:08:49.151 "data_offset": 0, 00:08:49.151 "data_size": 0 00:08:49.151 }, 00:08:49.151 { 00:08:49.151 "name": null, 00:08:49.151 "uuid": "e0cc2338-180e-48d2-a00f-429c94cf4d85", 00:08:49.151 "is_configured": false, 00:08:49.151 "data_offset": 0, 00:08:49.151 "data_size": 65536 00:08:49.151 }, 00:08:49.151 { 00:08:49.151 "name": "BaseBdev3", 00:08:49.151 "uuid": "db468a2f-23cf-4218-8ba9-32c44cb7927e", 00:08:49.151 "is_configured": true, 00:08:49.151 "data_offset": 0, 00:08:49.151 "data_size": 65536 00:08:49.151 } 00:08:49.151 ] 00:08:49.151 }' 00:08:49.151 13:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.151 13:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.719 13:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.719 13:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.719 13:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.719 13:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:49.719 13:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.719 13:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:49.719 13:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:49.719 13:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.719 
13:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.719 [2024-11-17 13:18:38.761383] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:49.719 BaseBdev1 00:08:49.719 13:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.719 13:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:49.719 13:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:49.719 13:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:49.719 13:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:49.719 13:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:49.719 13:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:49.719 13:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:49.719 13:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.719 13:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.719 13:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.719 13:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:49.719 13:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.719 13:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.719 [ 00:08:49.719 { 00:08:49.719 "name": "BaseBdev1", 00:08:49.719 "aliases": [ 00:08:49.719 "ad8037bd-fc6d-4bac-9888-36fdef302c0a" 00:08:49.719 ], 00:08:49.719 "product_name": 
"Malloc disk", 00:08:49.719 "block_size": 512, 00:08:49.719 "num_blocks": 65536, 00:08:49.719 "uuid": "ad8037bd-fc6d-4bac-9888-36fdef302c0a", 00:08:49.719 "assigned_rate_limits": { 00:08:49.719 "rw_ios_per_sec": 0, 00:08:49.719 "rw_mbytes_per_sec": 0, 00:08:49.719 "r_mbytes_per_sec": 0, 00:08:49.719 "w_mbytes_per_sec": 0 00:08:49.719 }, 00:08:49.719 "claimed": true, 00:08:49.719 "claim_type": "exclusive_write", 00:08:49.719 "zoned": false, 00:08:49.719 "supported_io_types": { 00:08:49.719 "read": true, 00:08:49.719 "write": true, 00:08:49.719 "unmap": true, 00:08:49.719 "flush": true, 00:08:49.719 "reset": true, 00:08:49.719 "nvme_admin": false, 00:08:49.719 "nvme_io": false, 00:08:49.719 "nvme_io_md": false, 00:08:49.719 "write_zeroes": true, 00:08:49.719 "zcopy": true, 00:08:49.719 "get_zone_info": false, 00:08:49.719 "zone_management": false, 00:08:49.719 "zone_append": false, 00:08:49.719 "compare": false, 00:08:49.719 "compare_and_write": false, 00:08:49.719 "abort": true, 00:08:49.719 "seek_hole": false, 00:08:49.719 "seek_data": false, 00:08:49.719 "copy": true, 00:08:49.719 "nvme_iov_md": false 00:08:49.719 }, 00:08:49.719 "memory_domains": [ 00:08:49.719 { 00:08:49.719 "dma_device_id": "system", 00:08:49.719 "dma_device_type": 1 00:08:49.719 }, 00:08:49.719 { 00:08:49.719 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.719 "dma_device_type": 2 00:08:49.719 } 00:08:49.719 ], 00:08:49.719 "driver_specific": {} 00:08:49.719 } 00:08:49.719 ] 00:08:49.719 13:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.719 13:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:49.719 13:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:49.719 13:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.719 13:18:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:49.719 13:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:49.719 13:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.719 13:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:49.719 13:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.719 13:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.719 13:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.719 13:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.719 13:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.719 13:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.719 13:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.719 13:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.719 13:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.719 13:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.719 "name": "Existed_Raid", 00:08:49.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.719 "strip_size_kb": 64, 00:08:49.719 "state": "configuring", 00:08:49.719 "raid_level": "concat", 00:08:49.719 "superblock": false, 00:08:49.719 "num_base_bdevs": 3, 00:08:49.719 "num_base_bdevs_discovered": 2, 00:08:49.719 "num_base_bdevs_operational": 3, 00:08:49.719 "base_bdevs_list": [ 00:08:49.719 { 00:08:49.720 "name": "BaseBdev1", 
00:08:49.720 "uuid": "ad8037bd-fc6d-4bac-9888-36fdef302c0a", 00:08:49.720 "is_configured": true, 00:08:49.720 "data_offset": 0, 00:08:49.720 "data_size": 65536 00:08:49.720 }, 00:08:49.720 { 00:08:49.720 "name": null, 00:08:49.720 "uuid": "e0cc2338-180e-48d2-a00f-429c94cf4d85", 00:08:49.720 "is_configured": false, 00:08:49.720 "data_offset": 0, 00:08:49.720 "data_size": 65536 00:08:49.720 }, 00:08:49.720 { 00:08:49.720 "name": "BaseBdev3", 00:08:49.720 "uuid": "db468a2f-23cf-4218-8ba9-32c44cb7927e", 00:08:49.720 "is_configured": true, 00:08:49.720 "data_offset": 0, 00:08:49.720 "data_size": 65536 00:08:49.720 } 00:08:49.720 ] 00:08:49.720 }' 00:08:49.720 13:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.720 13:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.286 13:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.286 13:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.286 13:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.286 13:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:50.286 13:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.286 13:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:50.286 13:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:50.286 13:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.286 13:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.286 [2024-11-17 13:18:39.240673] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:50.286 
13:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.286 13:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:50.286 13:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:50.286 13:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:50.286 13:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:50.286 13:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:50.286 13:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:50.286 13:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.286 13:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.286 13:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.286 13:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.286 13:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.286 13:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:50.286 13:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.286 13:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.286 13:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.286 13:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.286 "name": "Existed_Raid", 00:08:50.286 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:50.286 "strip_size_kb": 64, 00:08:50.286 "state": "configuring", 00:08:50.286 "raid_level": "concat", 00:08:50.286 "superblock": false, 00:08:50.286 "num_base_bdevs": 3, 00:08:50.286 "num_base_bdevs_discovered": 1, 00:08:50.286 "num_base_bdevs_operational": 3, 00:08:50.286 "base_bdevs_list": [ 00:08:50.286 { 00:08:50.286 "name": "BaseBdev1", 00:08:50.286 "uuid": "ad8037bd-fc6d-4bac-9888-36fdef302c0a", 00:08:50.286 "is_configured": true, 00:08:50.286 "data_offset": 0, 00:08:50.286 "data_size": 65536 00:08:50.286 }, 00:08:50.286 { 00:08:50.286 "name": null, 00:08:50.286 "uuid": "e0cc2338-180e-48d2-a00f-429c94cf4d85", 00:08:50.286 "is_configured": false, 00:08:50.286 "data_offset": 0, 00:08:50.286 "data_size": 65536 00:08:50.286 }, 00:08:50.286 { 00:08:50.286 "name": null, 00:08:50.286 "uuid": "db468a2f-23cf-4218-8ba9-32c44cb7927e", 00:08:50.286 "is_configured": false, 00:08:50.286 "data_offset": 0, 00:08:50.286 "data_size": 65536 00:08:50.286 } 00:08:50.286 ] 00:08:50.286 }' 00:08:50.286 13:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.286 13:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.545 13:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:50.545 13:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.545 13:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.545 13:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.545 13:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.545 13:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:50.545 13:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:50.545 13:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.545 13:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.545 [2024-11-17 13:18:39.695892] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:50.545 13:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.545 13:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:50.545 13:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:50.545 13:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:50.545 13:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:50.545 13:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:50.545 13:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:50.545 13:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.545 13:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.545 13:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.545 13:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.545 13:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.545 13:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:50.545 13:18:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.545 13:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.545 13:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.545 13:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.545 "name": "Existed_Raid", 00:08:50.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.545 "strip_size_kb": 64, 00:08:50.545 "state": "configuring", 00:08:50.545 "raid_level": "concat", 00:08:50.545 "superblock": false, 00:08:50.545 "num_base_bdevs": 3, 00:08:50.545 "num_base_bdevs_discovered": 2, 00:08:50.545 "num_base_bdevs_operational": 3, 00:08:50.545 "base_bdevs_list": [ 00:08:50.545 { 00:08:50.545 "name": "BaseBdev1", 00:08:50.545 "uuid": "ad8037bd-fc6d-4bac-9888-36fdef302c0a", 00:08:50.545 "is_configured": true, 00:08:50.545 "data_offset": 0, 00:08:50.545 "data_size": 65536 00:08:50.545 }, 00:08:50.545 { 00:08:50.545 "name": null, 00:08:50.545 "uuid": "e0cc2338-180e-48d2-a00f-429c94cf4d85", 00:08:50.545 "is_configured": false, 00:08:50.545 "data_offset": 0, 00:08:50.545 "data_size": 65536 00:08:50.545 }, 00:08:50.545 { 00:08:50.545 "name": "BaseBdev3", 00:08:50.545 "uuid": "db468a2f-23cf-4218-8ba9-32c44cb7927e", 00:08:50.545 "is_configured": true, 00:08:50.545 "data_offset": 0, 00:08:50.545 "data_size": 65536 00:08:50.545 } 00:08:50.545 ] 00:08:50.545 }' 00:08:50.545 13:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.545 13:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.113 13:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:51.113 13:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.113 13:18:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.113 13:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.113 13:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.113 13:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:51.113 13:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:51.113 13:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.113 13:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.113 [2024-11-17 13:18:40.139180] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:51.113 13:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.113 13:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:51.113 13:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:51.113 13:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:51.113 13:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:51.113 13:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.113 13:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:51.113 13:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.113 13:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.113 13:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.113 
13:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.113 13:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:51.113 13:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.113 13:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.113 13:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.113 13:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.113 13:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.113 "name": "Existed_Raid", 00:08:51.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:51.113 "strip_size_kb": 64, 00:08:51.113 "state": "configuring", 00:08:51.113 "raid_level": "concat", 00:08:51.113 "superblock": false, 00:08:51.113 "num_base_bdevs": 3, 00:08:51.113 "num_base_bdevs_discovered": 1, 00:08:51.113 "num_base_bdevs_operational": 3, 00:08:51.113 "base_bdevs_list": [ 00:08:51.113 { 00:08:51.113 "name": null, 00:08:51.113 "uuid": "ad8037bd-fc6d-4bac-9888-36fdef302c0a", 00:08:51.113 "is_configured": false, 00:08:51.113 "data_offset": 0, 00:08:51.113 "data_size": 65536 00:08:51.113 }, 00:08:51.113 { 00:08:51.113 "name": null, 00:08:51.113 "uuid": "e0cc2338-180e-48d2-a00f-429c94cf4d85", 00:08:51.113 "is_configured": false, 00:08:51.113 "data_offset": 0, 00:08:51.113 "data_size": 65536 00:08:51.113 }, 00:08:51.113 { 00:08:51.113 "name": "BaseBdev3", 00:08:51.113 "uuid": "db468a2f-23cf-4218-8ba9-32c44cb7927e", 00:08:51.113 "is_configured": true, 00:08:51.113 "data_offset": 0, 00:08:51.113 "data_size": 65536 00:08:51.113 } 00:08:51.113 ] 00:08:51.113 }' 00:08:51.113 13:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.114 13:18:40 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.682 13:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.682 13:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:51.682 13:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.682 13:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.682 13:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.682 13:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:51.682 13:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:51.682 13:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.682 13:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.682 [2024-11-17 13:18:40.696745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:51.682 13:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.682 13:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:51.682 13:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:51.682 13:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:51.682 13:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:51.682 13:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.682 13:18:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:51.683 13:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.683 13:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.683 13:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.683 13:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.683 13:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.683 13:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:51.683 13:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.683 13:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.683 13:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.683 13:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.683 "name": "Existed_Raid", 00:08:51.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:51.683 "strip_size_kb": 64, 00:08:51.683 "state": "configuring", 00:08:51.683 "raid_level": "concat", 00:08:51.683 "superblock": false, 00:08:51.683 "num_base_bdevs": 3, 00:08:51.683 "num_base_bdevs_discovered": 2, 00:08:51.683 "num_base_bdevs_operational": 3, 00:08:51.683 "base_bdevs_list": [ 00:08:51.683 { 00:08:51.683 "name": null, 00:08:51.683 "uuid": "ad8037bd-fc6d-4bac-9888-36fdef302c0a", 00:08:51.683 "is_configured": false, 00:08:51.683 "data_offset": 0, 00:08:51.683 "data_size": 65536 00:08:51.683 }, 00:08:51.683 { 00:08:51.683 "name": "BaseBdev2", 00:08:51.683 "uuid": "e0cc2338-180e-48d2-a00f-429c94cf4d85", 00:08:51.683 "is_configured": true, 00:08:51.683 "data_offset": 
0, 00:08:51.683 "data_size": 65536 00:08:51.683 }, 00:08:51.683 { 00:08:51.683 "name": "BaseBdev3", 00:08:51.683 "uuid": "db468a2f-23cf-4218-8ba9-32c44cb7927e", 00:08:51.683 "is_configured": true, 00:08:51.683 "data_offset": 0, 00:08:51.683 "data_size": 65536 00:08:51.683 } 00:08:51.683 ] 00:08:51.683 }' 00:08:51.683 13:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.683 13:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.943 13:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.943 13:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.943 13:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.943 13:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:51.943 13:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.211 13:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:52.211 13:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:52.211 13:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.211 13:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.211 13:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.211 13:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.211 13:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ad8037bd-fc6d-4bac-9888-36fdef302c0a 00:08:52.211 13:18:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.211 13:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.211 [2024-11-17 13:18:41.249087] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:52.211 [2024-11-17 13:18:41.249197] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:52.211 [2024-11-17 13:18:41.249243] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:52.211 [2024-11-17 13:18:41.249573] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:52.211 [2024-11-17 13:18:41.249792] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:52.211 [2024-11-17 13:18:41.249835] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:52.211 [2024-11-17 13:18:41.250120] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:52.211 NewBaseBdev 00:08:52.211 13:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.211 13:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:52.211 13:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:52.211 13:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:52.211 13:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:52.211 13:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:52.211 13:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:52.211 13:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:52.211 
13:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.211 13:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.211 13:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.211 13:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:52.211 13:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.211 13:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.211 [ 00:08:52.211 { 00:08:52.211 "name": "NewBaseBdev", 00:08:52.211 "aliases": [ 00:08:52.211 "ad8037bd-fc6d-4bac-9888-36fdef302c0a" 00:08:52.211 ], 00:08:52.211 "product_name": "Malloc disk", 00:08:52.211 "block_size": 512, 00:08:52.211 "num_blocks": 65536, 00:08:52.211 "uuid": "ad8037bd-fc6d-4bac-9888-36fdef302c0a", 00:08:52.211 "assigned_rate_limits": { 00:08:52.211 "rw_ios_per_sec": 0, 00:08:52.211 "rw_mbytes_per_sec": 0, 00:08:52.211 "r_mbytes_per_sec": 0, 00:08:52.211 "w_mbytes_per_sec": 0 00:08:52.211 }, 00:08:52.211 "claimed": true, 00:08:52.211 "claim_type": "exclusive_write", 00:08:52.211 "zoned": false, 00:08:52.211 "supported_io_types": { 00:08:52.211 "read": true, 00:08:52.211 "write": true, 00:08:52.211 "unmap": true, 00:08:52.211 "flush": true, 00:08:52.211 "reset": true, 00:08:52.211 "nvme_admin": false, 00:08:52.211 "nvme_io": false, 00:08:52.211 "nvme_io_md": false, 00:08:52.211 "write_zeroes": true, 00:08:52.211 "zcopy": true, 00:08:52.211 "get_zone_info": false, 00:08:52.211 "zone_management": false, 00:08:52.211 "zone_append": false, 00:08:52.211 "compare": false, 00:08:52.211 "compare_and_write": false, 00:08:52.211 "abort": true, 00:08:52.211 "seek_hole": false, 00:08:52.211 "seek_data": false, 00:08:52.211 "copy": true, 00:08:52.211 "nvme_iov_md": false 00:08:52.211 }, 00:08:52.211 
"memory_domains": [ 00:08:52.211 { 00:08:52.211 "dma_device_id": "system", 00:08:52.212 "dma_device_type": 1 00:08:52.212 }, 00:08:52.212 { 00:08:52.212 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:52.212 "dma_device_type": 2 00:08:52.212 } 00:08:52.212 ], 00:08:52.212 "driver_specific": {} 00:08:52.212 } 00:08:52.212 ] 00:08:52.212 13:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.212 13:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:52.212 13:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:52.212 13:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:52.212 13:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:52.212 13:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:52.212 13:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:52.212 13:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:52.212 13:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.212 13:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.212 13:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.212 13:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.212 13:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.212 13:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:52.212 13:18:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.212 13:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.212 13:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.212 13:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.212 "name": "Existed_Raid", 00:08:52.212 "uuid": "19af3716-940e-4fa3-9419-4d319d0676be", 00:08:52.212 "strip_size_kb": 64, 00:08:52.212 "state": "online", 00:08:52.212 "raid_level": "concat", 00:08:52.212 "superblock": false, 00:08:52.212 "num_base_bdevs": 3, 00:08:52.212 "num_base_bdevs_discovered": 3, 00:08:52.212 "num_base_bdevs_operational": 3, 00:08:52.212 "base_bdevs_list": [ 00:08:52.212 { 00:08:52.212 "name": "NewBaseBdev", 00:08:52.212 "uuid": "ad8037bd-fc6d-4bac-9888-36fdef302c0a", 00:08:52.212 "is_configured": true, 00:08:52.212 "data_offset": 0, 00:08:52.212 "data_size": 65536 00:08:52.212 }, 00:08:52.212 { 00:08:52.212 "name": "BaseBdev2", 00:08:52.212 "uuid": "e0cc2338-180e-48d2-a00f-429c94cf4d85", 00:08:52.212 "is_configured": true, 00:08:52.212 "data_offset": 0, 00:08:52.212 "data_size": 65536 00:08:52.212 }, 00:08:52.212 { 00:08:52.212 "name": "BaseBdev3", 00:08:52.212 "uuid": "db468a2f-23cf-4218-8ba9-32c44cb7927e", 00:08:52.212 "is_configured": true, 00:08:52.212 "data_offset": 0, 00:08:52.212 "data_size": 65536 00:08:52.212 } 00:08:52.212 ] 00:08:52.212 }' 00:08:52.212 13:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.212 13:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.486 13:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:52.486 13:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:52.486 13:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:08:52.486 13:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:52.486 13:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:52.486 13:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:52.486 13:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:52.486 13:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:52.486 13:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.486 13:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.486 [2024-11-17 13:18:41.704744] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:52.746 13:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.746 13:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:52.746 "name": "Existed_Raid", 00:08:52.746 "aliases": [ 00:08:52.746 "19af3716-940e-4fa3-9419-4d319d0676be" 00:08:52.746 ], 00:08:52.746 "product_name": "Raid Volume", 00:08:52.746 "block_size": 512, 00:08:52.746 "num_blocks": 196608, 00:08:52.746 "uuid": "19af3716-940e-4fa3-9419-4d319d0676be", 00:08:52.746 "assigned_rate_limits": { 00:08:52.746 "rw_ios_per_sec": 0, 00:08:52.746 "rw_mbytes_per_sec": 0, 00:08:52.746 "r_mbytes_per_sec": 0, 00:08:52.746 "w_mbytes_per_sec": 0 00:08:52.746 }, 00:08:52.746 "claimed": false, 00:08:52.746 "zoned": false, 00:08:52.746 "supported_io_types": { 00:08:52.746 "read": true, 00:08:52.746 "write": true, 00:08:52.746 "unmap": true, 00:08:52.746 "flush": true, 00:08:52.746 "reset": true, 00:08:52.746 "nvme_admin": false, 00:08:52.746 "nvme_io": false, 00:08:52.746 "nvme_io_md": false, 00:08:52.746 "write_zeroes": true, 
00:08:52.746 "zcopy": false, 00:08:52.746 "get_zone_info": false, 00:08:52.746 "zone_management": false, 00:08:52.746 "zone_append": false, 00:08:52.746 "compare": false, 00:08:52.746 "compare_and_write": false, 00:08:52.746 "abort": false, 00:08:52.746 "seek_hole": false, 00:08:52.746 "seek_data": false, 00:08:52.746 "copy": false, 00:08:52.746 "nvme_iov_md": false 00:08:52.746 }, 00:08:52.746 "memory_domains": [ 00:08:52.746 { 00:08:52.746 "dma_device_id": "system", 00:08:52.746 "dma_device_type": 1 00:08:52.746 }, 00:08:52.746 { 00:08:52.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:52.746 "dma_device_type": 2 00:08:52.746 }, 00:08:52.746 { 00:08:52.746 "dma_device_id": "system", 00:08:52.746 "dma_device_type": 1 00:08:52.746 }, 00:08:52.746 { 00:08:52.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:52.746 "dma_device_type": 2 00:08:52.746 }, 00:08:52.746 { 00:08:52.746 "dma_device_id": "system", 00:08:52.746 "dma_device_type": 1 00:08:52.746 }, 00:08:52.746 { 00:08:52.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:52.746 "dma_device_type": 2 00:08:52.746 } 00:08:52.746 ], 00:08:52.746 "driver_specific": { 00:08:52.746 "raid": { 00:08:52.746 "uuid": "19af3716-940e-4fa3-9419-4d319d0676be", 00:08:52.746 "strip_size_kb": 64, 00:08:52.746 "state": "online", 00:08:52.746 "raid_level": "concat", 00:08:52.746 "superblock": false, 00:08:52.746 "num_base_bdevs": 3, 00:08:52.746 "num_base_bdevs_discovered": 3, 00:08:52.746 "num_base_bdevs_operational": 3, 00:08:52.746 "base_bdevs_list": [ 00:08:52.746 { 00:08:52.746 "name": "NewBaseBdev", 00:08:52.746 "uuid": "ad8037bd-fc6d-4bac-9888-36fdef302c0a", 00:08:52.746 "is_configured": true, 00:08:52.746 "data_offset": 0, 00:08:52.746 "data_size": 65536 00:08:52.746 }, 00:08:52.746 { 00:08:52.746 "name": "BaseBdev2", 00:08:52.746 "uuid": "e0cc2338-180e-48d2-a00f-429c94cf4d85", 00:08:52.746 "is_configured": true, 00:08:52.746 "data_offset": 0, 00:08:52.746 "data_size": 65536 00:08:52.746 }, 00:08:52.746 { 
00:08:52.746 "name": "BaseBdev3", 00:08:52.746 "uuid": "db468a2f-23cf-4218-8ba9-32c44cb7927e", 00:08:52.746 "is_configured": true, 00:08:52.746 "data_offset": 0, 00:08:52.746 "data_size": 65536 00:08:52.746 } 00:08:52.746 ] 00:08:52.746 } 00:08:52.746 } 00:08:52.746 }' 00:08:52.746 13:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:52.746 13:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:52.746 BaseBdev2 00:08:52.746 BaseBdev3' 00:08:52.746 13:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:52.746 13:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:52.746 13:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:52.746 13:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:52.747 13:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.747 13:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.747 13:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:52.747 13:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.747 13:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:52.747 13:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:52.747 13:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:52.747 13:18:41 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:52.747 13:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.747 13:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.747 13:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:52.747 13:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.747 13:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:52.747 13:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:52.747 13:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:52.747 13:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:52.747 13:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.747 13:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.747 13:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:52.747 13:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.747 13:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:52.747 13:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:52.747 13:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:52.747 13:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.747 13:18:41 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:08:52.747 [2024-11-17 13:18:41.916042] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:52.747 [2024-11-17 13:18:41.916069] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:52.747 [2024-11-17 13:18:41.916140] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:52.747 [2024-11-17 13:18:41.916193] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:52.747 [2024-11-17 13:18:41.916206] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:52.747 13:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.747 13:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65543 00:08:52.747 13:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 65543 ']' 00:08:52.747 13:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65543 00:08:52.747 13:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:52.747 13:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:52.747 13:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65543 00:08:52.747 13:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:52.747 13:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:52.747 13:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65543' 00:08:52.747 killing process with pid 65543 00:08:52.747 13:18:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@973 -- # kill 65543 00:08:52.747 [2024-11-17 13:18:41.950778] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:52.747 13:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65543 00:08:53.316 [2024-11-17 13:18:42.255701] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:54.254 13:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:54.254 00:08:54.254 real 0m10.288s 00:08:54.254 user 0m16.329s 00:08:54.254 sys 0m1.791s 00:08:54.254 ************************************ 00:08:54.254 END TEST raid_state_function_test 00:08:54.254 ************************************ 00:08:54.254 13:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:54.254 13:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.254 13:18:43 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:08:54.254 13:18:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:54.254 13:18:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:54.254 13:18:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:54.254 ************************************ 00:08:54.254 START TEST raid_state_function_test_sb 00:08:54.254 ************************************ 00:08:54.254 13:18:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:08:54.254 13:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:54.254 13:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:54.254 13:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:54.254 13:18:43 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:54.254 13:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:54.254 13:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:54.254 13:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:54.254 13:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:54.254 13:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:54.254 13:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:54.254 13:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:54.254 13:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:54.254 13:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:54.254 13:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:54.254 13:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:54.254 13:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:54.254 13:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:54.254 13:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:54.254 13:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:54.254 13:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:54.254 13:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:54.254 13:18:43 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:54.254 13:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:54.254 13:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:54.254 13:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:54.254 13:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:54.254 13:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66164 00:08:54.254 13:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:54.254 13:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66164' 00:08:54.254 Process raid pid: 66164 00:08:54.254 13:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66164 00:08:54.254 13:18:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 66164 ']' 00:08:54.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.254 13:18:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.254 13:18:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:54.254 13:18:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:54.254 13:18:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:54.254 13:18:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.513 [2024-11-17 13:18:43.522066] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:08:54.514 [2024-11-17 13:18:43.522183] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:54.514 [2024-11-17 13:18:43.698442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.772 [2024-11-17 13:18:43.811412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.032 [2024-11-17 13:18:44.008094] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:55.032 [2024-11-17 13:18:44.008134] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:55.291 13:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:55.291 13:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:55.291 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:55.291 13:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.291 13:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.291 [2024-11-17 13:18:44.351815] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:55.291 [2024-11-17 13:18:44.351936] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:55.291 [2024-11-17 
13:18:44.351967] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:55.291 [2024-11-17 13:18:44.351978] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:55.291 [2024-11-17 13:18:44.351984] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:55.291 [2024-11-17 13:18:44.351993] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:55.291 13:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.291 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:55.291 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:55.292 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:55.292 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:55.292 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:55.292 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:55.292 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.292 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.292 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.292 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.292 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:55.292 13:18:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.292 13:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.292 13:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.292 13:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.292 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.292 "name": "Existed_Raid", 00:08:55.292 "uuid": "e2e93ba0-82a3-472b-8e78-d608edd06f08", 00:08:55.292 "strip_size_kb": 64, 00:08:55.292 "state": "configuring", 00:08:55.292 "raid_level": "concat", 00:08:55.292 "superblock": true, 00:08:55.292 "num_base_bdevs": 3, 00:08:55.292 "num_base_bdevs_discovered": 0, 00:08:55.292 "num_base_bdevs_operational": 3, 00:08:55.292 "base_bdevs_list": [ 00:08:55.292 { 00:08:55.292 "name": "BaseBdev1", 00:08:55.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.292 "is_configured": false, 00:08:55.292 "data_offset": 0, 00:08:55.292 "data_size": 0 00:08:55.292 }, 00:08:55.292 { 00:08:55.292 "name": "BaseBdev2", 00:08:55.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.292 "is_configured": false, 00:08:55.292 "data_offset": 0, 00:08:55.292 "data_size": 0 00:08:55.292 }, 00:08:55.292 { 00:08:55.292 "name": "BaseBdev3", 00:08:55.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.292 "is_configured": false, 00:08:55.292 "data_offset": 0, 00:08:55.292 "data_size": 0 00:08:55.292 } 00:08:55.292 ] 00:08:55.292 }' 00:08:55.292 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.292 13:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.551 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:55.551 13:18:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.552 13:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.552 [2024-11-17 13:18:44.751072] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:55.552 [2024-11-17 13:18:44.751165] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:55.552 13:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.552 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:55.552 13:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.552 13:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.552 [2024-11-17 13:18:44.763056] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:55.552 [2024-11-17 13:18:44.763154] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:55.552 [2024-11-17 13:18:44.763183] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:55.552 [2024-11-17 13:18:44.763206] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:55.552 [2024-11-17 13:18:44.763235] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:55.552 [2024-11-17 13:18:44.763257] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:55.552 13:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.552 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:55.552 
13:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.552 13:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.812 [2024-11-17 13:18:44.809529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:55.812 BaseBdev1 00:08:55.812 13:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.812 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:55.812 13:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:55.812 13:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:55.812 13:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:55.812 13:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:55.812 13:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:55.812 13:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:55.812 13:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.812 13:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.812 13:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.812 13:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:55.812 13:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.812 13:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.812 [ 00:08:55.812 { 
00:08:55.812 "name": "BaseBdev1", 00:08:55.812 "aliases": [ 00:08:55.812 "b1d3592c-988e-4354-bc5c-c92f46d7d7fc" 00:08:55.812 ], 00:08:55.812 "product_name": "Malloc disk", 00:08:55.812 "block_size": 512, 00:08:55.812 "num_blocks": 65536, 00:08:55.812 "uuid": "b1d3592c-988e-4354-bc5c-c92f46d7d7fc", 00:08:55.812 "assigned_rate_limits": { 00:08:55.812 "rw_ios_per_sec": 0, 00:08:55.812 "rw_mbytes_per_sec": 0, 00:08:55.812 "r_mbytes_per_sec": 0, 00:08:55.812 "w_mbytes_per_sec": 0 00:08:55.812 }, 00:08:55.812 "claimed": true, 00:08:55.812 "claim_type": "exclusive_write", 00:08:55.812 "zoned": false, 00:08:55.812 "supported_io_types": { 00:08:55.812 "read": true, 00:08:55.812 "write": true, 00:08:55.812 "unmap": true, 00:08:55.812 "flush": true, 00:08:55.812 "reset": true, 00:08:55.812 "nvme_admin": false, 00:08:55.812 "nvme_io": false, 00:08:55.812 "nvme_io_md": false, 00:08:55.812 "write_zeroes": true, 00:08:55.812 "zcopy": true, 00:08:55.812 "get_zone_info": false, 00:08:55.812 "zone_management": false, 00:08:55.812 "zone_append": false, 00:08:55.812 "compare": false, 00:08:55.812 "compare_and_write": false, 00:08:55.812 "abort": true, 00:08:55.812 "seek_hole": false, 00:08:55.812 "seek_data": false, 00:08:55.812 "copy": true, 00:08:55.812 "nvme_iov_md": false 00:08:55.812 }, 00:08:55.812 "memory_domains": [ 00:08:55.812 { 00:08:55.812 "dma_device_id": "system", 00:08:55.812 "dma_device_type": 1 00:08:55.812 }, 00:08:55.812 { 00:08:55.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.812 "dma_device_type": 2 00:08:55.813 } 00:08:55.813 ], 00:08:55.813 "driver_specific": {} 00:08:55.813 } 00:08:55.813 ] 00:08:55.813 13:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.813 13:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:55.813 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:08:55.813 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:55.813 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:55.813 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:55.813 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:55.813 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:55.813 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.813 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.813 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.813 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.813 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:55.813 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.813 13:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.813 13:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.813 13:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.813 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.813 "name": "Existed_Raid", 00:08:55.813 "uuid": "84373547-d5dc-44d5-a942-4e03c80384f8", 00:08:55.813 "strip_size_kb": 64, 00:08:55.813 "state": "configuring", 00:08:55.813 "raid_level": "concat", 00:08:55.813 "superblock": true, 00:08:55.813 
"num_base_bdevs": 3, 00:08:55.813 "num_base_bdevs_discovered": 1, 00:08:55.813 "num_base_bdevs_operational": 3, 00:08:55.813 "base_bdevs_list": [ 00:08:55.813 { 00:08:55.813 "name": "BaseBdev1", 00:08:55.813 "uuid": "b1d3592c-988e-4354-bc5c-c92f46d7d7fc", 00:08:55.813 "is_configured": true, 00:08:55.813 "data_offset": 2048, 00:08:55.813 "data_size": 63488 00:08:55.813 }, 00:08:55.813 { 00:08:55.813 "name": "BaseBdev2", 00:08:55.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.813 "is_configured": false, 00:08:55.813 "data_offset": 0, 00:08:55.813 "data_size": 0 00:08:55.813 }, 00:08:55.813 { 00:08:55.813 "name": "BaseBdev3", 00:08:55.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.813 "is_configured": false, 00:08:55.813 "data_offset": 0, 00:08:55.813 "data_size": 0 00:08:55.813 } 00:08:55.813 ] 00:08:55.813 }' 00:08:55.813 13:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.813 13:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.073 13:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:56.073 13:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.073 13:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.073 [2024-11-17 13:18:45.280795] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:56.073 [2024-11-17 13:18:45.280903] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:56.073 13:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.073 13:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:56.073 
13:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.073 13:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.073 [2024-11-17 13:18:45.288828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:56.073 [2024-11-17 13:18:45.290735] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:56.073 [2024-11-17 13:18:45.290813] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:56.073 [2024-11-17 13:18:45.290841] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:56.073 [2024-11-17 13:18:45.290863] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:56.073 13:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.073 13:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:56.073 13:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:56.073 13:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:56.073 13:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:56.073 13:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:56.073 13:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:56.073 13:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:56.333 13:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:56.333 13:18:45 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.333 13:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.333 13:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.333 13:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.333 13:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.333 13:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.333 13:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.333 13:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.333 13:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.333 13:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.333 "name": "Existed_Raid", 00:08:56.333 "uuid": "62c5b8d5-9e0e-41ae-95ea-783a9ab4343e", 00:08:56.333 "strip_size_kb": 64, 00:08:56.333 "state": "configuring", 00:08:56.333 "raid_level": "concat", 00:08:56.333 "superblock": true, 00:08:56.333 "num_base_bdevs": 3, 00:08:56.333 "num_base_bdevs_discovered": 1, 00:08:56.333 "num_base_bdevs_operational": 3, 00:08:56.333 "base_bdevs_list": [ 00:08:56.333 { 00:08:56.333 "name": "BaseBdev1", 00:08:56.333 "uuid": "b1d3592c-988e-4354-bc5c-c92f46d7d7fc", 00:08:56.333 "is_configured": true, 00:08:56.333 "data_offset": 2048, 00:08:56.333 "data_size": 63488 00:08:56.333 }, 00:08:56.333 { 00:08:56.333 "name": "BaseBdev2", 00:08:56.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.333 "is_configured": false, 00:08:56.333 "data_offset": 0, 00:08:56.333 "data_size": 0 00:08:56.333 }, 00:08:56.333 { 00:08:56.333 "name": "BaseBdev3", 00:08:56.333 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:56.333 "is_configured": false, 00:08:56.333 "data_offset": 0, 00:08:56.333 "data_size": 0 00:08:56.333 } 00:08:56.333 ] 00:08:56.333 }' 00:08:56.333 13:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.333 13:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.593 13:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:56.593 13:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.593 13:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.593 [2024-11-17 13:18:45.725011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:56.593 BaseBdev2 00:08:56.593 13:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.593 13:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:56.594 13:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:56.594 13:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:56.594 13:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:56.594 13:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:56.594 13:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:56.594 13:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:56.594 13:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.594 13:18:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:56.594 13:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.594 13:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:56.594 13:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.594 13:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.594 [ 00:08:56.594 { 00:08:56.594 "name": "BaseBdev2", 00:08:56.594 "aliases": [ 00:08:56.594 "5e2c2a2d-ddc1-4b68-ace0-f8e08f3ea84e" 00:08:56.594 ], 00:08:56.594 "product_name": "Malloc disk", 00:08:56.594 "block_size": 512, 00:08:56.594 "num_blocks": 65536, 00:08:56.594 "uuid": "5e2c2a2d-ddc1-4b68-ace0-f8e08f3ea84e", 00:08:56.594 "assigned_rate_limits": { 00:08:56.594 "rw_ios_per_sec": 0, 00:08:56.594 "rw_mbytes_per_sec": 0, 00:08:56.594 "r_mbytes_per_sec": 0, 00:08:56.594 "w_mbytes_per_sec": 0 00:08:56.594 }, 00:08:56.594 "claimed": true, 00:08:56.594 "claim_type": "exclusive_write", 00:08:56.594 "zoned": false, 00:08:56.594 "supported_io_types": { 00:08:56.594 "read": true, 00:08:56.594 "write": true, 00:08:56.594 "unmap": true, 00:08:56.594 "flush": true, 00:08:56.594 "reset": true, 00:08:56.594 "nvme_admin": false, 00:08:56.594 "nvme_io": false, 00:08:56.594 "nvme_io_md": false, 00:08:56.594 "write_zeroes": true, 00:08:56.594 "zcopy": true, 00:08:56.594 "get_zone_info": false, 00:08:56.594 "zone_management": false, 00:08:56.594 "zone_append": false, 00:08:56.594 "compare": false, 00:08:56.594 "compare_and_write": false, 00:08:56.594 "abort": true, 00:08:56.594 "seek_hole": false, 00:08:56.594 "seek_data": false, 00:08:56.594 "copy": true, 00:08:56.594 "nvme_iov_md": false 00:08:56.594 }, 00:08:56.594 "memory_domains": [ 00:08:56.594 { 00:08:56.594 "dma_device_id": "system", 00:08:56.594 "dma_device_type": 1 00:08:56.594 }, 00:08:56.594 { 00:08:56.594 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.594 "dma_device_type": 2 00:08:56.594 } 00:08:56.594 ], 00:08:56.594 "driver_specific": {} 00:08:56.594 } 00:08:56.594 ] 00:08:56.594 13:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.594 13:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:56.594 13:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:56.594 13:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:56.594 13:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:56.594 13:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:56.594 13:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:56.594 13:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:56.594 13:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:56.594 13:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:56.594 13:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.594 13:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.594 13:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.594 13:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.594 13:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.594 13:18:45 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.594 13:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.594 13:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.594 13:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.594 13:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.594 "name": "Existed_Raid", 00:08:56.594 "uuid": "62c5b8d5-9e0e-41ae-95ea-783a9ab4343e", 00:08:56.594 "strip_size_kb": 64, 00:08:56.594 "state": "configuring", 00:08:56.594 "raid_level": "concat", 00:08:56.594 "superblock": true, 00:08:56.594 "num_base_bdevs": 3, 00:08:56.594 "num_base_bdevs_discovered": 2, 00:08:56.594 "num_base_bdevs_operational": 3, 00:08:56.594 "base_bdevs_list": [ 00:08:56.594 { 00:08:56.594 "name": "BaseBdev1", 00:08:56.594 "uuid": "b1d3592c-988e-4354-bc5c-c92f46d7d7fc", 00:08:56.594 "is_configured": true, 00:08:56.594 "data_offset": 2048, 00:08:56.594 "data_size": 63488 00:08:56.594 }, 00:08:56.594 { 00:08:56.594 "name": "BaseBdev2", 00:08:56.594 "uuid": "5e2c2a2d-ddc1-4b68-ace0-f8e08f3ea84e", 00:08:56.594 "is_configured": true, 00:08:56.594 "data_offset": 2048, 00:08:56.594 "data_size": 63488 00:08:56.594 }, 00:08:56.594 { 00:08:56.594 "name": "BaseBdev3", 00:08:56.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.594 "is_configured": false, 00:08:56.594 "data_offset": 0, 00:08:56.594 "data_size": 0 00:08:56.594 } 00:08:56.594 ] 00:08:56.594 }' 00:08:56.594 13:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.594 13:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.164 13:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:57.164 13:18:46 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.164 13:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.164 [2024-11-17 13:18:46.249026] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:57.164 [2024-11-17 13:18:46.249382] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:57.164 [2024-11-17 13:18:46.249448] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:57.164 [2024-11-17 13:18:46.249770] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:57.164 BaseBdev3 00:08:57.164 [2024-11-17 13:18:46.250006] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:57.164 [2024-11-17 13:18:46.250045] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:57.164 [2024-11-17 13:18:46.250206] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:57.164 13:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.164 13:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:57.164 13:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:57.164 13:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:57.164 13:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:57.164 13:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:57.164 13:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:57.164 13:18:46 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:57.164 13:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.164 13:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.164 13:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.164 13:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:57.164 13:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.164 13:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.164 [ 00:08:57.164 { 00:08:57.164 "name": "BaseBdev3", 00:08:57.164 "aliases": [ 00:08:57.164 "6abc7da6-74d7-4284-bd79-5bb12f85c584" 00:08:57.164 ], 00:08:57.164 "product_name": "Malloc disk", 00:08:57.164 "block_size": 512, 00:08:57.164 "num_blocks": 65536, 00:08:57.164 "uuid": "6abc7da6-74d7-4284-bd79-5bb12f85c584", 00:08:57.164 "assigned_rate_limits": { 00:08:57.164 "rw_ios_per_sec": 0, 00:08:57.164 "rw_mbytes_per_sec": 0, 00:08:57.164 "r_mbytes_per_sec": 0, 00:08:57.164 "w_mbytes_per_sec": 0 00:08:57.164 }, 00:08:57.164 "claimed": true, 00:08:57.164 "claim_type": "exclusive_write", 00:08:57.164 "zoned": false, 00:08:57.164 "supported_io_types": { 00:08:57.164 "read": true, 00:08:57.164 "write": true, 00:08:57.164 "unmap": true, 00:08:57.164 "flush": true, 00:08:57.164 "reset": true, 00:08:57.164 "nvme_admin": false, 00:08:57.164 "nvme_io": false, 00:08:57.164 "nvme_io_md": false, 00:08:57.164 "write_zeroes": true, 00:08:57.164 "zcopy": true, 00:08:57.164 "get_zone_info": false, 00:08:57.164 "zone_management": false, 00:08:57.164 "zone_append": false, 00:08:57.164 "compare": false, 00:08:57.164 "compare_and_write": false, 00:08:57.164 "abort": true, 00:08:57.164 "seek_hole": false, 00:08:57.164 "seek_data": false, 
00:08:57.164 "copy": true, 00:08:57.164 "nvme_iov_md": false 00:08:57.164 }, 00:08:57.164 "memory_domains": [ 00:08:57.164 { 00:08:57.164 "dma_device_id": "system", 00:08:57.164 "dma_device_type": 1 00:08:57.164 }, 00:08:57.164 { 00:08:57.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.164 "dma_device_type": 2 00:08:57.164 } 00:08:57.164 ], 00:08:57.164 "driver_specific": {} 00:08:57.164 } 00:08:57.164 ] 00:08:57.164 13:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.164 13:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:57.164 13:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:57.164 13:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:57.164 13:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:57.164 13:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:57.164 13:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:57.164 13:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:57.164 13:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:57.165 13:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:57.165 13:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.165 13:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.165 13:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.165 13:18:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.165 13:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.165 13:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.165 13:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.165 13:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.165 13:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.165 13:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.165 "name": "Existed_Raid", 00:08:57.165 "uuid": "62c5b8d5-9e0e-41ae-95ea-783a9ab4343e", 00:08:57.165 "strip_size_kb": 64, 00:08:57.165 "state": "online", 00:08:57.165 "raid_level": "concat", 00:08:57.165 "superblock": true, 00:08:57.165 "num_base_bdevs": 3, 00:08:57.165 "num_base_bdevs_discovered": 3, 00:08:57.165 "num_base_bdevs_operational": 3, 00:08:57.165 "base_bdevs_list": [ 00:08:57.165 { 00:08:57.165 "name": "BaseBdev1", 00:08:57.165 "uuid": "b1d3592c-988e-4354-bc5c-c92f46d7d7fc", 00:08:57.165 "is_configured": true, 00:08:57.165 "data_offset": 2048, 00:08:57.165 "data_size": 63488 00:08:57.165 }, 00:08:57.165 { 00:08:57.165 "name": "BaseBdev2", 00:08:57.165 "uuid": "5e2c2a2d-ddc1-4b68-ace0-f8e08f3ea84e", 00:08:57.165 "is_configured": true, 00:08:57.165 "data_offset": 2048, 00:08:57.165 "data_size": 63488 00:08:57.165 }, 00:08:57.165 { 00:08:57.165 "name": "BaseBdev3", 00:08:57.165 "uuid": "6abc7da6-74d7-4284-bd79-5bb12f85c584", 00:08:57.165 "is_configured": true, 00:08:57.165 "data_offset": 2048, 00:08:57.165 "data_size": 63488 00:08:57.165 } 00:08:57.165 ] 00:08:57.165 }' 00:08:57.165 13:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.165 13:18:46 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.735 13:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:57.735 13:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:57.735 13:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:57.735 13:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:57.735 13:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:57.735 13:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:57.735 13:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:57.735 13:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:57.735 13:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.735 13:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.735 [2024-11-17 13:18:46.764584] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:57.735 13:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.735 13:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:57.735 "name": "Existed_Raid", 00:08:57.735 "aliases": [ 00:08:57.735 "62c5b8d5-9e0e-41ae-95ea-783a9ab4343e" 00:08:57.735 ], 00:08:57.735 "product_name": "Raid Volume", 00:08:57.735 "block_size": 512, 00:08:57.735 "num_blocks": 190464, 00:08:57.735 "uuid": "62c5b8d5-9e0e-41ae-95ea-783a9ab4343e", 00:08:57.735 "assigned_rate_limits": { 00:08:57.735 "rw_ios_per_sec": 0, 00:08:57.735 "rw_mbytes_per_sec": 0, 00:08:57.735 
"r_mbytes_per_sec": 0, 00:08:57.735 "w_mbytes_per_sec": 0 00:08:57.735 }, 00:08:57.735 "claimed": false, 00:08:57.735 "zoned": false, 00:08:57.735 "supported_io_types": { 00:08:57.735 "read": true, 00:08:57.735 "write": true, 00:08:57.735 "unmap": true, 00:08:57.735 "flush": true, 00:08:57.735 "reset": true, 00:08:57.735 "nvme_admin": false, 00:08:57.735 "nvme_io": false, 00:08:57.735 "nvme_io_md": false, 00:08:57.735 "write_zeroes": true, 00:08:57.735 "zcopy": false, 00:08:57.735 "get_zone_info": false, 00:08:57.735 "zone_management": false, 00:08:57.735 "zone_append": false, 00:08:57.735 "compare": false, 00:08:57.735 "compare_and_write": false, 00:08:57.735 "abort": false, 00:08:57.735 "seek_hole": false, 00:08:57.735 "seek_data": false, 00:08:57.735 "copy": false, 00:08:57.735 "nvme_iov_md": false 00:08:57.735 }, 00:08:57.735 "memory_domains": [ 00:08:57.735 { 00:08:57.735 "dma_device_id": "system", 00:08:57.735 "dma_device_type": 1 00:08:57.735 }, 00:08:57.735 { 00:08:57.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.735 "dma_device_type": 2 00:08:57.735 }, 00:08:57.735 { 00:08:57.735 "dma_device_id": "system", 00:08:57.735 "dma_device_type": 1 00:08:57.735 }, 00:08:57.735 { 00:08:57.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.735 "dma_device_type": 2 00:08:57.735 }, 00:08:57.735 { 00:08:57.735 "dma_device_id": "system", 00:08:57.735 "dma_device_type": 1 00:08:57.735 }, 00:08:57.735 { 00:08:57.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.735 "dma_device_type": 2 00:08:57.735 } 00:08:57.735 ], 00:08:57.735 "driver_specific": { 00:08:57.735 "raid": { 00:08:57.735 "uuid": "62c5b8d5-9e0e-41ae-95ea-783a9ab4343e", 00:08:57.735 "strip_size_kb": 64, 00:08:57.735 "state": "online", 00:08:57.735 "raid_level": "concat", 00:08:57.735 "superblock": true, 00:08:57.735 "num_base_bdevs": 3, 00:08:57.735 "num_base_bdevs_discovered": 3, 00:08:57.735 "num_base_bdevs_operational": 3, 00:08:57.735 "base_bdevs_list": [ 00:08:57.735 { 00:08:57.735 
"name": "BaseBdev1", 00:08:57.735 "uuid": "b1d3592c-988e-4354-bc5c-c92f46d7d7fc", 00:08:57.735 "is_configured": true, 00:08:57.735 "data_offset": 2048, 00:08:57.735 "data_size": 63488 00:08:57.735 }, 00:08:57.735 { 00:08:57.735 "name": "BaseBdev2", 00:08:57.735 "uuid": "5e2c2a2d-ddc1-4b68-ace0-f8e08f3ea84e", 00:08:57.735 "is_configured": true, 00:08:57.735 "data_offset": 2048, 00:08:57.735 "data_size": 63488 00:08:57.735 }, 00:08:57.735 { 00:08:57.735 "name": "BaseBdev3", 00:08:57.735 "uuid": "6abc7da6-74d7-4284-bd79-5bb12f85c584", 00:08:57.735 "is_configured": true, 00:08:57.735 "data_offset": 2048, 00:08:57.735 "data_size": 63488 00:08:57.735 } 00:08:57.735 ] 00:08:57.735 } 00:08:57.735 } 00:08:57.735 }' 00:08:57.735 13:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:57.735 13:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:57.735 BaseBdev2 00:08:57.735 BaseBdev3' 00:08:57.735 13:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.735 13:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:57.735 13:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:57.735 13:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:57.735 13:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.735 13:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.735 13:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.735 13:18:46 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.735 13:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:57.735 13:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:57.735 13:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:57.735 13:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:57.735 13:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.735 13:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.735 13:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.735 13:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.996 13:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:57.996 13:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:57.996 13:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:57.996 13:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:57.996 13:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.996 13:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.996 13:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.996 13:18:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.996 13:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:57.996 13:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:57.996 13:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:57.996 13:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.996 13:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.996 [2024-11-17 13:18:47.035842] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:57.996 [2024-11-17 13:18:47.035913] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:57.996 [2024-11-17 13:18:47.035988] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:57.996 13:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.996 13:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:57.996 13:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:57.996 13:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:57.996 13:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:57.996 13:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:57.996 13:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:08:57.996 13:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:57.996 13:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:08:57.996 13:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:57.996 13:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:57.996 13:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:57.996 13:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.996 13:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.996 13:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.996 13:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.996 13:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.996 13:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.996 13:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.996 13:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.996 13:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.996 13:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.996 "name": "Existed_Raid", 00:08:57.996 "uuid": "62c5b8d5-9e0e-41ae-95ea-783a9ab4343e", 00:08:57.996 "strip_size_kb": 64, 00:08:57.996 "state": "offline", 00:08:57.996 "raid_level": "concat", 00:08:57.996 "superblock": true, 00:08:57.996 "num_base_bdevs": 3, 00:08:57.996 "num_base_bdevs_discovered": 2, 00:08:57.996 "num_base_bdevs_operational": 2, 00:08:57.996 "base_bdevs_list": [ 00:08:57.996 { 00:08:57.996 "name": null, 00:08:57.996 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:57.996 "is_configured": false, 00:08:57.996 "data_offset": 0, 00:08:57.996 "data_size": 63488 00:08:57.996 }, 00:08:57.996 { 00:08:57.996 "name": "BaseBdev2", 00:08:57.996 "uuid": "5e2c2a2d-ddc1-4b68-ace0-f8e08f3ea84e", 00:08:57.996 "is_configured": true, 00:08:57.996 "data_offset": 2048, 00:08:57.996 "data_size": 63488 00:08:57.996 }, 00:08:57.996 { 00:08:57.996 "name": "BaseBdev3", 00:08:57.996 "uuid": "6abc7da6-74d7-4284-bd79-5bb12f85c584", 00:08:57.996 "is_configured": true, 00:08:57.996 "data_offset": 2048, 00:08:57.996 "data_size": 63488 00:08:57.996 } 00:08:57.996 ] 00:08:57.996 }' 00:08:57.996 13:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.996 13:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.566 13:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:58.566 13:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:58.566 13:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.566 13:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:58.566 13:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.566 13:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.566 13:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.566 13:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:58.566 13:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:58.566 13:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:08:58.566 13:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.566 13:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.566 [2024-11-17 13:18:47.601483] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:58.566 13:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.566 13:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:58.566 13:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:58.566 13:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.566 13:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:58.566 13:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.566 13:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.566 13:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.566 13:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:58.566 13:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:58.566 13:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:58.566 13:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.566 13:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.566 [2024-11-17 13:18:47.754528] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:58.566 [2024-11-17 13:18:47.754581] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:58.862 13:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.862 13:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:58.862 13:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:58.862 13:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.862 13:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:58.862 13:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.862 13:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.862 13:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.862 13:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:58.862 13:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:58.862 13:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:58.862 13:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:58.862 13:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:58.862 13:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:58.862 13:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.862 13:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.862 BaseBdev2 00:08:58.862 13:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.862 
13:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:58.862 13:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:58.862 13:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:58.862 13:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:58.862 13:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:58.862 13:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:58.862 13:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:58.862 13:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.862 13:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.862 13:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.862 13:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:58.862 13:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.862 13:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.862 [ 00:08:58.862 { 00:08:58.862 "name": "BaseBdev2", 00:08:58.862 "aliases": [ 00:08:58.862 "9eed46da-6e27-4868-ba8b-2432d0c1cb2a" 00:08:58.862 ], 00:08:58.862 "product_name": "Malloc disk", 00:08:58.862 "block_size": 512, 00:08:58.862 "num_blocks": 65536, 00:08:58.862 "uuid": "9eed46da-6e27-4868-ba8b-2432d0c1cb2a", 00:08:58.862 "assigned_rate_limits": { 00:08:58.862 "rw_ios_per_sec": 0, 00:08:58.862 "rw_mbytes_per_sec": 0, 00:08:58.862 "r_mbytes_per_sec": 0, 00:08:58.862 "w_mbytes_per_sec": 0 
00:08:58.862 },
00:08:58.862 "claimed": false,
00:08:58.862 "zoned": false,
00:08:58.862 "supported_io_types": {
00:08:58.862 "read": true,
00:08:58.862 "write": true,
00:08:58.862 "unmap": true,
00:08:58.862 "flush": true,
00:08:58.862 "reset": true,
00:08:58.862 "nvme_admin": false,
00:08:58.862 "nvme_io": false,
00:08:58.862 "nvme_io_md": false,
00:08:58.862 "write_zeroes": true,
00:08:58.862 "zcopy": true,
00:08:58.862 "get_zone_info": false,
00:08:58.862 "zone_management": false,
00:08:58.862 "zone_append": false,
00:08:58.862 "compare": false,
00:08:58.862 "compare_and_write": false,
00:08:58.862 "abort": true,
00:08:58.862 "seek_hole": false,
00:08:58.862 "seek_data": false,
00:08:58.862 "copy": true,
00:08:58.862 "nvme_iov_md": false
00:08:58.862 },
00:08:58.862 "memory_domains": [
00:08:58.862 {
00:08:58.862 "dma_device_id": "system",
00:08:58.862 "dma_device_type": 1
00:08:58.862 },
00:08:58.862 {
00:08:58.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:58.862 "dma_device_type": 2
00:08:58.862 }
00:08:58.862 ],
00:08:58.862 "driver_specific": {}
00:08:58.862 }
00:08:58.862 ]
00:08:58.862 13:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:58.862 13:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:08:58.862 13:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:08:58.862 13:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:08:58.862 13:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:08:58.862 13:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:58.862 13:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:58.862 BaseBdev3
00:08:58.862 13:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:58.862 13:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:08:58.862 13:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:08:58.862 13:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:08:58.862 13:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:08:58.862 13:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:08:58.862 13:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:08:58.862 13:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:08:58.862 13:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:58.862 13:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:58.862 13:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:58.862 13:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:08:58.862 13:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:58.862 13:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:58.862 [
00:08:58.862 {
00:08:58.862 "name": "BaseBdev3",
00:08:58.862 "aliases": [
00:08:58.862 "1c31e8cd-4500-45ae-b271-4eeed014ef08"
00:08:58.862 ],
00:08:58.862 "product_name": "Malloc disk",
00:08:58.862 "block_size": 512,
00:08:58.862 "num_blocks": 65536,
00:08:58.862 "uuid": "1c31e8cd-4500-45ae-b271-4eeed014ef08",
00:08:58.862 "assigned_rate_limits": {
00:08:58.862 "rw_ios_per_sec": 0,
00:08:58.862 "rw_mbytes_per_sec": 0,
00:08:58.862 "r_mbytes_per_sec": 0,
00:08:58.862 "w_mbytes_per_sec": 0
00:08:58.862 },
00:08:58.862 "claimed": false,
00:08:58.862 "zoned": false,
00:08:58.862 "supported_io_types": {
00:08:58.862 "read": true,
00:08:58.862 "write": true,
00:08:58.862 "unmap": true,
00:08:58.862 "flush": true,
00:08:58.862 "reset": true,
00:08:58.862 "nvme_admin": false,
00:08:58.862 "nvme_io": false,
00:08:58.863 "nvme_io_md": false,
00:08:58.863 "write_zeroes": true,
00:08:58.863 "zcopy": true,
00:08:58.863 "get_zone_info": false,
00:08:58.863 "zone_management": false,
00:08:58.863 "zone_append": false,
00:08:58.863 "compare": false,
00:08:58.863 "compare_and_write": false,
00:08:58.863 "abort": true,
00:08:58.863 "seek_hole": false,
00:08:58.863 "seek_data": false,
00:08:58.863 "copy": true,
00:08:58.863 "nvme_iov_md": false
00:08:58.863 },
00:08:58.863 "memory_domains": [
00:08:58.863 {
00:08:58.863 "dma_device_id": "system",
00:08:58.863 "dma_device_type": 1
00:08:58.863 },
00:08:58.863 {
00:08:58.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:58.863 "dma_device_type": 2
00:08:58.863 }
00:08:58.863 ],
00:08:58.863 "driver_specific": {}
00:08:58.863 }
00:08:58.863 ]
00:08:58.863 13:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:58.863 13:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:08:58.863 13:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:08:58.863 13:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:08:58.863 13:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:08:58.863 13:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:58.863 13:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:59.175 [2024-11-17 13:18:48.061380] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:59.175 [2024-11-17 13:18:48.061476] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:59.175 [2024-11-17 13:18:48.061526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:08:59.175 [2024-11-17 13:18:48.063578] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:08:59.175 13:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:59.176 13:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:08:59.176 13:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:59.176 13:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:59.176 13:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:59.176 13:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:59.176 13:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:59.176 13:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:59.176 13:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:59.176 13:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:59.176 13:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:59.176 13:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:59.176 13:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:59.176 13:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:59.176 13:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:59.176 13:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:59.176 13:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:59.176 "name": "Existed_Raid",
00:08:59.176 "uuid": "9d7a1f8d-a15f-4dfc-a54a-69438f42f2e9",
00:08:59.176 "strip_size_kb": 64,
00:08:59.176 "state": "configuring",
00:08:59.176 "raid_level": "concat",
00:08:59.176 "superblock": true,
00:08:59.176 "num_base_bdevs": 3,
00:08:59.176 "num_base_bdevs_discovered": 2,
00:08:59.176 "num_base_bdevs_operational": 3,
00:08:59.176 "base_bdevs_list": [
00:08:59.176 {
00:08:59.176 "name": "BaseBdev1",
00:08:59.176 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:59.176 "is_configured": false,
00:08:59.176 "data_offset": 0,
00:08:59.176 "data_size": 0
00:08:59.176 },
00:08:59.176 {
00:08:59.176 "name": "BaseBdev2",
00:08:59.176 "uuid": "9eed46da-6e27-4868-ba8b-2432d0c1cb2a",
00:08:59.176 "is_configured": true,
00:08:59.176 "data_offset": 2048,
00:08:59.176 "data_size": 63488
00:08:59.176 },
00:08:59.176 {
00:08:59.176 "name": "BaseBdev3",
00:08:59.176 "uuid": "1c31e8cd-4500-45ae-b271-4eeed014ef08",
00:08:59.176 "is_configured": true,
00:08:59.176 "data_offset": 2048,
00:08:59.176 "data_size": 63488
00:08:59.176 }
00:08:59.176 ]
00:08:59.176 }'
00:08:59.176 13:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:59.176 13:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:59.436 13:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:08:59.436 13:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:59.436 13:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:59.436 [2024-11-17 13:18:48.484678] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:08:59.436 13:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:59.436 13:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:08:59.436 13:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:59.436 13:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:59.436 13:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:59.436 13:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:59.436 13:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:59.436 13:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:59.436 13:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:59.436 13:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:59.436 13:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:59.436 13:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:59.436 13:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:59.436 13:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:59.436 13:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:59.436 13:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:59.436 13:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:59.436 "name": "Existed_Raid",
00:08:59.436 "uuid": "9d7a1f8d-a15f-4dfc-a54a-69438f42f2e9",
00:08:59.437 "strip_size_kb": 64,
00:08:59.437 "state": "configuring",
00:08:59.437 "raid_level": "concat",
00:08:59.437 "superblock": true,
00:08:59.437 "num_base_bdevs": 3,
00:08:59.437 "num_base_bdevs_discovered": 1,
00:08:59.437 "num_base_bdevs_operational": 3,
00:08:59.437 "base_bdevs_list": [
00:08:59.437 {
00:08:59.437 "name": "BaseBdev1",
00:08:59.437 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:59.437 "is_configured": false,
00:08:59.437 "data_offset": 0,
00:08:59.437 "data_size": 0
00:08:59.437 },
00:08:59.437 {
00:08:59.437 "name": null,
00:08:59.437 "uuid": "9eed46da-6e27-4868-ba8b-2432d0c1cb2a",
00:08:59.437 "is_configured": false,
00:08:59.437 "data_offset": 0,
00:08:59.437 "data_size": 63488
00:08:59.437 },
00:08:59.437 {
00:08:59.437 "name": "BaseBdev3",
00:08:59.437 "uuid": "1c31e8cd-4500-45ae-b271-4eeed014ef08",
00:08:59.437 "is_configured": true,
00:08:59.437 "data_offset": 2048,
00:08:59.437 "data_size": 63488
00:08:59.437 }
00:08:59.437 ]
00:08:59.437 }'
00:08:59.437 13:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:59.437 13:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:59.697 13:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:59.697 13:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:08:59.697 13:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:59.697 13:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:59.957 13:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:59.957 13:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:08:59.957 13:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:08:59.957 13:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:59.957 13:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:59.957 [2024-11-17 13:18:49.001067] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:59.957 BaseBdev1
00:08:59.957 13:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:59.957 13:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:08:59.957 13:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:08:59.957 13:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:08:59.957 13:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:08:59.957 13:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:08:59.957 13:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:08:59.957 13:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:08:59.957 13:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:59.957 13:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:59.957 13:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:59.957 13:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:08:59.957 13:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:59.957 13:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:59.957 [
00:08:59.957 {
00:08:59.957 "name": "BaseBdev1",
00:08:59.957 "aliases": [
00:08:59.957 "fb836f94-cbcb-4f4c-a2dd-305bdc74682c"
00:08:59.957 ],
00:08:59.957 "product_name": "Malloc disk",
00:08:59.957 "block_size": 512,
00:08:59.957 "num_blocks": 65536,
00:08:59.957 "uuid": "fb836f94-cbcb-4f4c-a2dd-305bdc74682c",
00:08:59.957 "assigned_rate_limits": {
00:08:59.957 "rw_ios_per_sec": 0,
00:08:59.957 "rw_mbytes_per_sec": 0,
00:08:59.957 "r_mbytes_per_sec": 0,
00:08:59.957 "w_mbytes_per_sec": 0
00:08:59.957 },
00:08:59.957 "claimed": true,
00:08:59.957 "claim_type": "exclusive_write",
00:08:59.958 "zoned": false,
00:08:59.958 "supported_io_types": {
00:08:59.958 "read": true,
00:08:59.958 "write": true,
00:08:59.958 "unmap": true,
00:08:59.958 "flush": true,
00:08:59.958 "reset": true,
00:08:59.958 "nvme_admin": false,
00:08:59.958 "nvme_io": false,
00:08:59.958 "nvme_io_md": false,
00:08:59.958 "write_zeroes": true,
00:08:59.958 "zcopy": true,
00:08:59.958 "get_zone_info": false,
00:08:59.958 "zone_management": false,
00:08:59.958 "zone_append": false,
00:08:59.958 "compare": false,
00:08:59.958 "compare_and_write": false,
00:08:59.958 "abort": true,
00:08:59.958 "seek_hole": false,
00:08:59.958 "seek_data": false,
00:08:59.958 "copy": true,
00:08:59.958 "nvme_iov_md": false
00:08:59.958 },
00:08:59.958 "memory_domains": [
00:08:59.958 {
00:08:59.958 "dma_device_id": "system",
00:08:59.958 "dma_device_type": 1
00:08:59.958 },
00:08:59.958 {
00:08:59.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:59.958 "dma_device_type": 2
00:08:59.958 }
00:08:59.958 ],
00:08:59.958 "driver_specific": {}
00:08:59.958 }
00:08:59.958 ]
00:08:59.958 13:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:59.958 13:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:08:59.958 13:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:08:59.958 13:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:59.958 13:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:59.958 13:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:59.958 13:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:59.958 13:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:59.958 13:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:59.958 13:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:59.958 13:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:59.958 13:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:59.958 13:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:59.958 13:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:59.958 13:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:59.958 13:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:59.958 13:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:59.958 13:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:59.958 "name": "Existed_Raid",
00:08:59.958 "uuid": "9d7a1f8d-a15f-4dfc-a54a-69438f42f2e9",
00:08:59.958 "strip_size_kb": 64,
00:08:59.958 "state": "configuring",
00:08:59.958 "raid_level": "concat",
00:08:59.958 "superblock": true,
00:08:59.958 "num_base_bdevs": 3,
00:08:59.958 "num_base_bdevs_discovered": 2,
00:08:59.958 "num_base_bdevs_operational": 3,
00:08:59.958 "base_bdevs_list": [
00:08:59.958 {
00:08:59.958 "name": "BaseBdev1",
00:08:59.958 "uuid": "fb836f94-cbcb-4f4c-a2dd-305bdc74682c",
00:08:59.958 "is_configured": true,
00:08:59.958 "data_offset": 2048,
00:08:59.958 "data_size": 63488
00:08:59.958 },
00:08:59.958 {
00:08:59.958 "name": null,
00:08:59.958 "uuid": "9eed46da-6e27-4868-ba8b-2432d0c1cb2a",
00:08:59.958 "is_configured": false,
00:08:59.958 "data_offset": 0,
00:08:59.958 "data_size": 63488
00:08:59.958 },
00:08:59.958 {
00:08:59.958 "name": "BaseBdev3",
00:08:59.958 "uuid": "1c31e8cd-4500-45ae-b271-4eeed014ef08",
00:08:59.958 "is_configured": true,
00:08:59.958 "data_offset": 2048,
00:08:59.958 "data_size": 63488
00:08:59.958 }
00:08:59.958 ]
00:08:59.958 }'
00:08:59.958 13:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:59.958 13:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:00.216 13:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:00.216 13:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:00.216 13:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:00.216 13:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:09:00.216 13:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:00.216 13:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:09:00.216 13:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:09:00.216 13:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:00.216 13:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:00.475 [2024-11-17 13:18:49.440389] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:09:00.475 13:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:00.475 13:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:09:00.475 13:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:00.475 13:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:00.475 13:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:00.475 13:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:00.475 13:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:00.475 13:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:00.475 13:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:00.475 13:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:00.475 13:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:00.475 13:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:00.475 13:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:00.475 13:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:00.475 13:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:00.475 13:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:00.475 13:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:00.475 "name": "Existed_Raid",
00:09:00.475 "uuid": "9d7a1f8d-a15f-4dfc-a54a-69438f42f2e9",
00:09:00.475 "strip_size_kb": 64,
00:09:00.475 "state": "configuring",
00:09:00.475 "raid_level": "concat",
00:09:00.475 "superblock": true,
00:09:00.475 "num_base_bdevs": 3,
00:09:00.475 "num_base_bdevs_discovered": 1,
00:09:00.475 "num_base_bdevs_operational": 3,
00:09:00.475 "base_bdevs_list": [
00:09:00.475 {
00:09:00.475 "name": "BaseBdev1",
00:09:00.475 "uuid": "fb836f94-cbcb-4f4c-a2dd-305bdc74682c",
00:09:00.475 "is_configured": true,
00:09:00.475 "data_offset": 2048,
00:09:00.475 "data_size": 63488
00:09:00.475 },
00:09:00.475 {
00:09:00.475 "name": null,
00:09:00.475 "uuid": "9eed46da-6e27-4868-ba8b-2432d0c1cb2a",
00:09:00.475 "is_configured": false,
00:09:00.475 "data_offset": 0,
00:09:00.475 "data_size": 63488
00:09:00.475 },
00:09:00.475 {
00:09:00.475 "name": null,
00:09:00.475 "uuid": "1c31e8cd-4500-45ae-b271-4eeed014ef08",
00:09:00.475 "is_configured": false,
00:09:00.475 "data_offset": 0,
00:09:00.475 "data_size": 63488
00:09:00.475 }
00:09:00.475 ]
00:09:00.475 }'
00:09:00.475 13:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:00.475 13:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:00.734 13:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:00.734 13:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:09:00.734 13:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:00.734 13:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:00.734 13:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:00.734 13:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:09:00.734 13:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:09:00.734 13:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:00.734 13:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:00.734 [2024-11-17 13:18:49.939594] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:09:00.734 13:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:00.734 13:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:09:00.734 13:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:00.734 13:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:00.734 13:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:00.734 13:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:00.734 13:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:00.734 13:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:00.734 13:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:00.734 13:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:00.734 13:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:00.734 13:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:00.734 13:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:00.734 13:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:00.734 13:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:00.993 13:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:00.993 13:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:00.993 "name": "Existed_Raid",
00:09:00.993 "uuid": "9d7a1f8d-a15f-4dfc-a54a-69438f42f2e9",
00:09:00.993 "strip_size_kb": 64,
00:09:00.993 "state": "configuring",
00:09:00.993 "raid_level": "concat",
00:09:00.993 "superblock": true,
00:09:00.993 "num_base_bdevs": 3,
00:09:00.993 "num_base_bdevs_discovered": 2,
00:09:00.993 "num_base_bdevs_operational": 3,
00:09:00.993 "base_bdevs_list": [
00:09:00.993 {
00:09:00.993 "name": "BaseBdev1",
00:09:00.993 "uuid": "fb836f94-cbcb-4f4c-a2dd-305bdc74682c",
00:09:00.993 "is_configured": true,
00:09:00.993 "data_offset": 2048,
00:09:00.993 "data_size": 63488
00:09:00.993 },
00:09:00.993 {
00:09:00.993 "name": null,
00:09:00.993 "uuid": "9eed46da-6e27-4868-ba8b-2432d0c1cb2a",
00:09:00.993 "is_configured": false,
00:09:00.993 "data_offset": 0,
00:09:00.993 "data_size": 63488
00:09:00.993 },
00:09:00.993 {
00:09:00.993 "name": "BaseBdev3",
00:09:00.993 "uuid": "1c31e8cd-4500-45ae-b271-4eeed014ef08",
00:09:00.993 "is_configured": true,
00:09:00.993 "data_offset": 2048,
00:09:00.993 "data_size": 63488
00:09:00.993 }
00:09:00.993 ]
00:09:00.993 }'
00:09:00.993 13:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:00.993 13:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:01.252 13:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:01.252 13:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:01.252 13:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:01.252 13:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:09:01.252 13:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:01.252 13:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:09:01.253 13:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:09:01.253 13:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:01.253 13:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:01.253 [2024-11-17 13:18:50.414795] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:09:01.513 13:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:01.513 13:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:09:01.513 13:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:01.513 13:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:01.513 13:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:01.513 13:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:01.513 13:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:01.513 13:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:01.513 13:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:01.513 13:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:01.513 13:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:01.513 13:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:01.513 13:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:01.513 13:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:01.513 13:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:01.513 13:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:01.513 13:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:01.513 "name": "Existed_Raid",
00:09:01.513 "uuid": "9d7a1f8d-a15f-4dfc-a54a-69438f42f2e9",
00:09:01.513 "strip_size_kb": 64,
00:09:01.513 "state": "configuring",
00:09:01.513 "raid_level": "concat",
00:09:01.513 "superblock": true,
00:09:01.513 "num_base_bdevs": 3, "num_base_bdevs_discovered": 1,
00:09:01.513 "num_base_bdevs_operational": 3,
00:09:01.513 "base_bdevs_list": [
00:09:01.513 {
00:09:01.513 "name": null,
00:09:01.513 "uuid": "fb836f94-cbcb-4f4c-a2dd-305bdc74682c",
00:09:01.513 "is_configured": false,
00:09:01.513 "data_offset": 0,
00:09:01.513 "data_size": 63488
00:09:01.513 },
00:09:01.513 {
00:09:01.513 "name": null,
00:09:01.513 "uuid": "9eed46da-6e27-4868-ba8b-2432d0c1cb2a",
00:09:01.513 "is_configured": false,
00:09:01.513 "data_offset": 0,
00:09:01.513 "data_size": 63488
00:09:01.513 },
00:09:01.513 {
00:09:01.513 "name": "BaseBdev3",
00:09:01.513 "uuid": "1c31e8cd-4500-45ae-b271-4eeed014ef08",
00:09:01.513 "is_configured": true,
00:09:01.513 "data_offset": 2048,
00:09:01.513 "data_size": 63488
00:09:01.513 }
00:09:01.513 ]
00:09:01.513 }'
00:09:01.513 13:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:01.513 13:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:01.771 13:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:09:01.771 13:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:01.771 13:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:01.771 13:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:01.771 13:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:01.771 13:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:09:01.771 13:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:09:01.771 13:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:01.771 13:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:01.771 [2024-11-17 13:18:50.962498] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:01.771 13:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:01.771 13:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:09:01.771 13:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:01.771 13:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:01.771 13:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:01.771 13:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:01.771 13:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:01.771 13:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:01.771 13:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:01.771 13:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:01.771 13:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:01.771 13:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:01.771 13:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:01.771 13:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:01.771 13:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:01.771 13:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:02.030 13:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:02.030 "name": "Existed_Raid",
00:09:02.030 "uuid": "9d7a1f8d-a15f-4dfc-a54a-69438f42f2e9",
00:09:02.030 "strip_size_kb": 64,
00:09:02.030 "state": "configuring",
00:09:02.030 "raid_level": "concat",
00:09:02.030 "superblock": true,
00:09:02.030 "num_base_bdevs": 3,
00:09:02.030 "num_base_bdevs_discovered": 2,
00:09:02.030 "num_base_bdevs_operational": 3,
00:09:02.030 "base_bdevs_list": [
00:09:02.030 {
00:09:02.030 "name": null,
00:09:02.030 "uuid": "fb836f94-cbcb-4f4c-a2dd-305bdc74682c",
00:09:02.030 "is_configured": false,
00:09:02.030 "data_offset": 0,
00:09:02.030 "data_size": 63488
00:09:02.030 },
00:09:02.030 {
00:09:02.030 "name": "BaseBdev2",
00:09:02.031 "uuid": "9eed46da-6e27-4868-ba8b-2432d0c1cb2a",
00:09:02.031 "is_configured": true,
00:09:02.031 "data_offset": 2048,
00:09:02.031 "data_size": 63488
00:09:02.031 },
00:09:02.031 {
00:09:02.031 "name": "BaseBdev3",
00:09:02.031 "uuid": "1c31e8cd-4500-45ae-b271-4eeed014ef08",
00:09:02.031 "is_configured": true,
00:09:02.031 "data_offset": 2048,
00:09:02.031 "data_size": 63488
00:09:02.031 }
00:09:02.031 ]
00:09:02.031 }'
00:09:02.031 13:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:02.031 13:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:02.290 13:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:02.290 13:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:09:02.290 13:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:02.290 13:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.290 13:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:02.290 13:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.290 13:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:02.290 13:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.290 13:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.290 13:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.290 13:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u fb836f94-cbcb-4f4c-a2dd-305bdc74682c 00:09:02.290 13:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.290 13:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.550 [2024-11-17 13:18:51.550132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:02.550 [2024-11-17 13:18:51.550362] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:02.550 [2024-11-17 13:18:51.550379] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:02.550 [2024-11-17 13:18:51.550647] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:02.550 [2024-11-17 13:18:51.550834] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:02.550 [2024-11-17 13:18:51.550845] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 
00:09:02.550 [2024-11-17 13:18:51.550974] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:02.550 NewBaseBdev 00:09:02.550 13:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.550 13:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:02.550 13:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:02.550 13:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:02.550 13:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:02.550 13:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:02.550 13:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:02.550 13:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:02.550 13:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.550 13:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.550 13:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.550 13:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:02.550 13:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.550 13:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.550 [ 00:09:02.550 { 00:09:02.550 "name": "NewBaseBdev", 00:09:02.550 "aliases": [ 00:09:02.550 "fb836f94-cbcb-4f4c-a2dd-305bdc74682c" 00:09:02.550 ], 00:09:02.550 "product_name": "Malloc disk", 00:09:02.550 "block_size": 512, 
00:09:02.550 "num_blocks": 65536, 00:09:02.550 "uuid": "fb836f94-cbcb-4f4c-a2dd-305bdc74682c", 00:09:02.550 "assigned_rate_limits": { 00:09:02.550 "rw_ios_per_sec": 0, 00:09:02.550 "rw_mbytes_per_sec": 0, 00:09:02.550 "r_mbytes_per_sec": 0, 00:09:02.550 "w_mbytes_per_sec": 0 00:09:02.550 }, 00:09:02.550 "claimed": true, 00:09:02.550 "claim_type": "exclusive_write", 00:09:02.550 "zoned": false, 00:09:02.550 "supported_io_types": { 00:09:02.550 "read": true, 00:09:02.550 "write": true, 00:09:02.550 "unmap": true, 00:09:02.550 "flush": true, 00:09:02.550 "reset": true, 00:09:02.550 "nvme_admin": false, 00:09:02.550 "nvme_io": false, 00:09:02.550 "nvme_io_md": false, 00:09:02.550 "write_zeroes": true, 00:09:02.550 "zcopy": true, 00:09:02.550 "get_zone_info": false, 00:09:02.550 "zone_management": false, 00:09:02.550 "zone_append": false, 00:09:02.550 "compare": false, 00:09:02.550 "compare_and_write": false, 00:09:02.550 "abort": true, 00:09:02.550 "seek_hole": false, 00:09:02.550 "seek_data": false, 00:09:02.550 "copy": true, 00:09:02.550 "nvme_iov_md": false 00:09:02.550 }, 00:09:02.550 "memory_domains": [ 00:09:02.550 { 00:09:02.550 "dma_device_id": "system", 00:09:02.550 "dma_device_type": 1 00:09:02.550 }, 00:09:02.550 { 00:09:02.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.550 "dma_device_type": 2 00:09:02.550 } 00:09:02.550 ], 00:09:02.550 "driver_specific": {} 00:09:02.550 } 00:09:02.550 ] 00:09:02.550 13:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.550 13:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:02.550 13:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:02.550 13:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:02.550 13:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:09:02.550 13:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:02.550 13:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:02.550 13:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:02.550 13:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.550 13:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.550 13:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.550 13:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.550 13:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.550 13:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:02.550 13:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.550 13:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.550 13:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.550 13:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.550 "name": "Existed_Raid", 00:09:02.550 "uuid": "9d7a1f8d-a15f-4dfc-a54a-69438f42f2e9", 00:09:02.550 "strip_size_kb": 64, 00:09:02.550 "state": "online", 00:09:02.550 "raid_level": "concat", 00:09:02.550 "superblock": true, 00:09:02.550 "num_base_bdevs": 3, 00:09:02.550 "num_base_bdevs_discovered": 3, 00:09:02.550 "num_base_bdevs_operational": 3, 00:09:02.550 "base_bdevs_list": [ 00:09:02.550 { 00:09:02.550 "name": "NewBaseBdev", 00:09:02.550 "uuid": 
"fb836f94-cbcb-4f4c-a2dd-305bdc74682c", 00:09:02.550 "is_configured": true, 00:09:02.550 "data_offset": 2048, 00:09:02.550 "data_size": 63488 00:09:02.550 }, 00:09:02.550 { 00:09:02.550 "name": "BaseBdev2", 00:09:02.550 "uuid": "9eed46da-6e27-4868-ba8b-2432d0c1cb2a", 00:09:02.550 "is_configured": true, 00:09:02.550 "data_offset": 2048, 00:09:02.550 "data_size": 63488 00:09:02.550 }, 00:09:02.550 { 00:09:02.550 "name": "BaseBdev3", 00:09:02.550 "uuid": "1c31e8cd-4500-45ae-b271-4eeed014ef08", 00:09:02.550 "is_configured": true, 00:09:02.550 "data_offset": 2048, 00:09:02.550 "data_size": 63488 00:09:02.550 } 00:09:02.550 ] 00:09:02.550 }' 00:09:02.550 13:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.550 13:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.810 13:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:02.810 13:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:02.810 13:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:02.810 13:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:02.810 13:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:02.810 13:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:02.810 13:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:03.070 13:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:03.070 13:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.070 13:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:03.071 [2024-11-17 13:18:52.041697] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:03.071 13:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.071 13:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:03.071 "name": "Existed_Raid", 00:09:03.071 "aliases": [ 00:09:03.071 "9d7a1f8d-a15f-4dfc-a54a-69438f42f2e9" 00:09:03.071 ], 00:09:03.071 "product_name": "Raid Volume", 00:09:03.071 "block_size": 512, 00:09:03.071 "num_blocks": 190464, 00:09:03.071 "uuid": "9d7a1f8d-a15f-4dfc-a54a-69438f42f2e9", 00:09:03.071 "assigned_rate_limits": { 00:09:03.071 "rw_ios_per_sec": 0, 00:09:03.071 "rw_mbytes_per_sec": 0, 00:09:03.071 "r_mbytes_per_sec": 0, 00:09:03.071 "w_mbytes_per_sec": 0 00:09:03.071 }, 00:09:03.071 "claimed": false, 00:09:03.071 "zoned": false, 00:09:03.071 "supported_io_types": { 00:09:03.071 "read": true, 00:09:03.071 "write": true, 00:09:03.071 "unmap": true, 00:09:03.071 "flush": true, 00:09:03.071 "reset": true, 00:09:03.071 "nvme_admin": false, 00:09:03.071 "nvme_io": false, 00:09:03.071 "nvme_io_md": false, 00:09:03.071 "write_zeroes": true, 00:09:03.071 "zcopy": false, 00:09:03.071 "get_zone_info": false, 00:09:03.071 "zone_management": false, 00:09:03.071 "zone_append": false, 00:09:03.071 "compare": false, 00:09:03.071 "compare_and_write": false, 00:09:03.071 "abort": false, 00:09:03.071 "seek_hole": false, 00:09:03.071 "seek_data": false, 00:09:03.071 "copy": false, 00:09:03.071 "nvme_iov_md": false 00:09:03.071 }, 00:09:03.071 "memory_domains": [ 00:09:03.071 { 00:09:03.071 "dma_device_id": "system", 00:09:03.071 "dma_device_type": 1 00:09:03.071 }, 00:09:03.071 { 00:09:03.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.071 "dma_device_type": 2 00:09:03.071 }, 00:09:03.071 { 00:09:03.071 "dma_device_id": "system", 00:09:03.071 "dma_device_type": 1 00:09:03.071 }, 00:09:03.071 { 00:09:03.071 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.071 "dma_device_type": 2 00:09:03.071 }, 00:09:03.071 { 00:09:03.071 "dma_device_id": "system", 00:09:03.071 "dma_device_type": 1 00:09:03.071 }, 00:09:03.071 { 00:09:03.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.071 "dma_device_type": 2 00:09:03.071 } 00:09:03.071 ], 00:09:03.071 "driver_specific": { 00:09:03.071 "raid": { 00:09:03.071 "uuid": "9d7a1f8d-a15f-4dfc-a54a-69438f42f2e9", 00:09:03.071 "strip_size_kb": 64, 00:09:03.071 "state": "online", 00:09:03.071 "raid_level": "concat", 00:09:03.071 "superblock": true, 00:09:03.071 "num_base_bdevs": 3, 00:09:03.071 "num_base_bdevs_discovered": 3, 00:09:03.071 "num_base_bdevs_operational": 3, 00:09:03.071 "base_bdevs_list": [ 00:09:03.071 { 00:09:03.071 "name": "NewBaseBdev", 00:09:03.071 "uuid": "fb836f94-cbcb-4f4c-a2dd-305bdc74682c", 00:09:03.071 "is_configured": true, 00:09:03.071 "data_offset": 2048, 00:09:03.071 "data_size": 63488 00:09:03.071 }, 00:09:03.071 { 00:09:03.071 "name": "BaseBdev2", 00:09:03.071 "uuid": "9eed46da-6e27-4868-ba8b-2432d0c1cb2a", 00:09:03.071 "is_configured": true, 00:09:03.071 "data_offset": 2048, 00:09:03.071 "data_size": 63488 00:09:03.071 }, 00:09:03.071 { 00:09:03.071 "name": "BaseBdev3", 00:09:03.071 "uuid": "1c31e8cd-4500-45ae-b271-4eeed014ef08", 00:09:03.071 "is_configured": true, 00:09:03.071 "data_offset": 2048, 00:09:03.071 "data_size": 63488 00:09:03.071 } 00:09:03.071 ] 00:09:03.071 } 00:09:03.071 } 00:09:03.071 }' 00:09:03.071 13:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:03.071 13:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:03.071 BaseBdev2 00:09:03.071 BaseBdev3' 00:09:03.071 13:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:09:03.071 13:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:03.071 13:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:03.071 13:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:03.071 13:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:03.071 13:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.071 13:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.071 13:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.071 13:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:03.071 13:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:03.071 13:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:03.071 13:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:03.071 13:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.071 13:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.071 13:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:03.071 13:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.071 13:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:03.071 13:18:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:03.071 13:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:03.071 13:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:03.071 13:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:03.071 13:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.071 13:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.331 13:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.331 13:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:03.331 13:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:03.331 13:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:03.331 13:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.331 13:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.331 [2024-11-17 13:18:52.336881] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:03.331 [2024-11-17 13:18:52.336949] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:03.331 [2024-11-17 13:18:52.337058] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:03.331 [2024-11-17 13:18:52.337148] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:03.331 [2024-11-17 13:18:52.337213] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:09:03.331 13:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.331 13:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66164 00:09:03.331 13:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 66164 ']' 00:09:03.331 13:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 66164 00:09:03.331 13:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:03.331 13:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:03.331 13:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66164 00:09:03.331 killing process with pid 66164 00:09:03.331 13:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:03.332 13:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:03.332 13:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66164' 00:09:03.332 13:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66164 00:09:03.332 [2024-11-17 13:18:52.386458] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:03.332 13:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66164 00:09:03.591 [2024-11-17 13:18:52.686702] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:04.978 ************************************ 00:09:04.978 END TEST raid_state_function_test_sb 00:09:04.978 ************************************ 00:09:04.978 13:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:04.978 00:09:04.978 real 0m10.380s 
00:09:04.978 user 0m16.505s 00:09:04.978 sys 0m1.778s 00:09:04.978 13:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:04.978 13:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.978 13:18:53 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:09:04.978 13:18:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:04.978 13:18:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:04.978 13:18:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:04.978 ************************************ 00:09:04.978 START TEST raid_superblock_test 00:09:04.978 ************************************ 00:09:04.978 13:18:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:09:04.978 13:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:04.978 13:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:04.978 13:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:04.978 13:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:04.978 13:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:04.978 13:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:04.978 13:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:04.978 13:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:04.978 13:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:04.978 13:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:04.978 13:18:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:04.978 13:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:04.978 13:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:04.978 13:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:09:04.978 13:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:04.978 13:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:04.978 13:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66783 00:09:04.978 13:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:04.978 13:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66783 00:09:04.978 13:18:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 66783 ']' 00:09:04.978 13:18:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:04.978 13:18:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:04.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:04.978 13:18:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:04.978 13:18:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:04.978 13:18:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.978 [2024-11-17 13:18:53.970932] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:09:04.978 [2024-11-17 13:18:53.971125] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66783 ] 00:09:04.978 [2024-11-17 13:18:54.126021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.260 [2024-11-17 13:18:54.243878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.260 [2024-11-17 13:18:54.453318] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:05.260 [2024-11-17 13:18:54.453389] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:05.831 13:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:05.831 13:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:05.831 13:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:05.831 13:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:05.831 13:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:05.831 13:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:05.831 13:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:05.831 13:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:05.831 13:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:05.831 13:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:05.831 13:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:05.831 
13:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.831 13:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.831 malloc1 00:09:05.831 13:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.831 13:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:05.831 13:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.831 13:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.831 [2024-11-17 13:18:54.872208] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:05.831 [2024-11-17 13:18:54.872357] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:05.831 [2024-11-17 13:18:54.872405] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:05.831 [2024-11-17 13:18:54.872437] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:05.831 [2024-11-17 13:18:54.874726] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:05.831 [2024-11-17 13:18:54.874806] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:05.831 pt1 00:09:05.831 13:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.831 13:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:05.831 13:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:05.831 13:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:05.831 13:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:05.831 13:18:54 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:05.831 13:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:05.831 13:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:05.831 13:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:05.831 13:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:05.831 13:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.831 13:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.831 malloc2 00:09:05.831 13:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.831 13:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:05.831 13:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.831 13:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.831 [2024-11-17 13:18:54.931923] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:05.831 [2024-11-17 13:18:54.931983] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:05.831 [2024-11-17 13:18:54.932004] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:05.831 [2024-11-17 13:18:54.932014] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:05.831 [2024-11-17 13:18:54.934388] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:05.831 [2024-11-17 13:18:54.934476] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:05.831 
pt2 00:09:05.831 13:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.831 13:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:05.831 13:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:05.831 13:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:05.831 13:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:05.831 13:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:05.831 13:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:05.831 13:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:05.831 13:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:05.831 13:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:05.831 13:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.831 13:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.831 malloc3 00:09:05.831 13:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.831 13:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:05.831 13:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.831 13:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.831 [2024-11-17 13:18:55.002589] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:05.831 [2024-11-17 13:18:55.002705] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:05.831 [2024-11-17 13:18:55.002747] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:05.831 [2024-11-17 13:18:55.002776] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:05.831 [2024-11-17 13:18:55.005265] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:05.831 [2024-11-17 13:18:55.005343] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:05.831 pt3 00:09:05.831 13:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.831 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:05.831 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:05.831 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:05.831 13:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.831 13:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.831 [2024-11-17 13:18:55.014655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:05.831 [2024-11-17 13:18:55.016718] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:05.831 [2024-11-17 13:18:55.016857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:05.831 [2024-11-17 13:18:55.017091] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:05.832 [2024-11-17 13:18:55.017150] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:05.832 [2024-11-17 13:18:55.017489] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:09:05.832 [2024-11-17 13:18:55.017727] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:05.832 [2024-11-17 13:18:55.017775] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:05.832 [2024-11-17 13:18:55.018043] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:05.832 13:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.832 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:05.832 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:05.832 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:05.832 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:05.832 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.832 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.832 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.832 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.832 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.832 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.832 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.832 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:05.832 13:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.832 13:18:55 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.832 13:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.092 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.092 "name": "raid_bdev1", 00:09:06.092 "uuid": "6fb5ae95-a36b-4459-98e1-cac748753d68", 00:09:06.092 "strip_size_kb": 64, 00:09:06.092 "state": "online", 00:09:06.092 "raid_level": "concat", 00:09:06.092 "superblock": true, 00:09:06.092 "num_base_bdevs": 3, 00:09:06.092 "num_base_bdevs_discovered": 3, 00:09:06.092 "num_base_bdevs_operational": 3, 00:09:06.092 "base_bdevs_list": [ 00:09:06.092 { 00:09:06.092 "name": "pt1", 00:09:06.092 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:06.092 "is_configured": true, 00:09:06.092 "data_offset": 2048, 00:09:06.092 "data_size": 63488 00:09:06.092 }, 00:09:06.092 { 00:09:06.092 "name": "pt2", 00:09:06.092 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:06.092 "is_configured": true, 00:09:06.092 "data_offset": 2048, 00:09:06.092 "data_size": 63488 00:09:06.092 }, 00:09:06.092 { 00:09:06.092 "name": "pt3", 00:09:06.092 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:06.092 "is_configured": true, 00:09:06.092 "data_offset": 2048, 00:09:06.092 "data_size": 63488 00:09:06.092 } 00:09:06.092 ] 00:09:06.092 }' 00:09:06.092 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.092 13:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.351 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:06.351 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:06.351 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:06.351 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:09:06.351 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:06.351 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:06.351 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:06.351 13:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.351 13:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.351 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:06.351 [2024-11-17 13:18:55.450164] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:06.351 13:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.351 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:06.351 "name": "raid_bdev1", 00:09:06.351 "aliases": [ 00:09:06.351 "6fb5ae95-a36b-4459-98e1-cac748753d68" 00:09:06.351 ], 00:09:06.351 "product_name": "Raid Volume", 00:09:06.351 "block_size": 512, 00:09:06.351 "num_blocks": 190464, 00:09:06.351 "uuid": "6fb5ae95-a36b-4459-98e1-cac748753d68", 00:09:06.351 "assigned_rate_limits": { 00:09:06.351 "rw_ios_per_sec": 0, 00:09:06.351 "rw_mbytes_per_sec": 0, 00:09:06.351 "r_mbytes_per_sec": 0, 00:09:06.351 "w_mbytes_per_sec": 0 00:09:06.351 }, 00:09:06.351 "claimed": false, 00:09:06.351 "zoned": false, 00:09:06.351 "supported_io_types": { 00:09:06.351 "read": true, 00:09:06.351 "write": true, 00:09:06.351 "unmap": true, 00:09:06.351 "flush": true, 00:09:06.351 "reset": true, 00:09:06.351 "nvme_admin": false, 00:09:06.351 "nvme_io": false, 00:09:06.351 "nvme_io_md": false, 00:09:06.351 "write_zeroes": true, 00:09:06.351 "zcopy": false, 00:09:06.351 "get_zone_info": false, 00:09:06.351 "zone_management": false, 00:09:06.351 "zone_append": false, 00:09:06.351 "compare": 
false, 00:09:06.351 "compare_and_write": false, 00:09:06.351 "abort": false, 00:09:06.351 "seek_hole": false, 00:09:06.351 "seek_data": false, 00:09:06.351 "copy": false, 00:09:06.351 "nvme_iov_md": false 00:09:06.351 }, 00:09:06.351 "memory_domains": [ 00:09:06.351 { 00:09:06.351 "dma_device_id": "system", 00:09:06.351 "dma_device_type": 1 00:09:06.351 }, 00:09:06.351 { 00:09:06.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.351 "dma_device_type": 2 00:09:06.351 }, 00:09:06.351 { 00:09:06.351 "dma_device_id": "system", 00:09:06.351 "dma_device_type": 1 00:09:06.351 }, 00:09:06.351 { 00:09:06.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.351 "dma_device_type": 2 00:09:06.351 }, 00:09:06.351 { 00:09:06.351 "dma_device_id": "system", 00:09:06.351 "dma_device_type": 1 00:09:06.351 }, 00:09:06.351 { 00:09:06.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.351 "dma_device_type": 2 00:09:06.351 } 00:09:06.351 ], 00:09:06.351 "driver_specific": { 00:09:06.351 "raid": { 00:09:06.351 "uuid": "6fb5ae95-a36b-4459-98e1-cac748753d68", 00:09:06.351 "strip_size_kb": 64, 00:09:06.352 "state": "online", 00:09:06.352 "raid_level": "concat", 00:09:06.352 "superblock": true, 00:09:06.352 "num_base_bdevs": 3, 00:09:06.352 "num_base_bdevs_discovered": 3, 00:09:06.352 "num_base_bdevs_operational": 3, 00:09:06.352 "base_bdevs_list": [ 00:09:06.352 { 00:09:06.352 "name": "pt1", 00:09:06.352 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:06.352 "is_configured": true, 00:09:06.352 "data_offset": 2048, 00:09:06.352 "data_size": 63488 00:09:06.352 }, 00:09:06.352 { 00:09:06.352 "name": "pt2", 00:09:06.352 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:06.352 "is_configured": true, 00:09:06.352 "data_offset": 2048, 00:09:06.352 "data_size": 63488 00:09:06.352 }, 00:09:06.352 { 00:09:06.352 "name": "pt3", 00:09:06.352 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:06.352 "is_configured": true, 00:09:06.352 "data_offset": 2048, 00:09:06.352 
"data_size": 63488 00:09:06.352 } 00:09:06.352 ] 00:09:06.352 } 00:09:06.352 } 00:09:06.352 }' 00:09:06.352 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:06.352 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:06.352 pt2 00:09:06.352 pt3' 00:09:06.352 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:06.352 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:06.352 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:06.352 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:06.352 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:06.352 13:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.352 13:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.611 13:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.611 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:06.611 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:06.611 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:06.611 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:06.611 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:06.611 13:18:55 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.611 13:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.611 13:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.611 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:06.611 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:06.611 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:06.612 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:06.612 13:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.612 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:06.612 13:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.612 13:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.612 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:06.612 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:06.612 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:06.612 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:06.612 13:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.612 13:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.612 [2024-11-17 13:18:55.693723] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:06.612 13:18:55 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.612 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=6fb5ae95-a36b-4459-98e1-cac748753d68 00:09:06.612 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 6fb5ae95-a36b-4459-98e1-cac748753d68 ']' 00:09:06.612 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:06.612 13:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.612 13:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.612 [2024-11-17 13:18:55.741328] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:06.612 [2024-11-17 13:18:55.741357] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:06.612 [2024-11-17 13:18:55.741438] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:06.612 [2024-11-17 13:18:55.741501] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:06.612 [2024-11-17 13:18:55.741511] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:06.612 13:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.612 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.612 13:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.612 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:06.612 13:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.612 13:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.612 13:18:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:06.612 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:06.612 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:06.612 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:06.612 13:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.612 13:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.612 13:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.612 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:06.612 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:06.612 13:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.612 13:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.612 13:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.612 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:06.612 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:06.612 13:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.612 13:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.612 13:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.612 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:06.612 13:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:06.612 13:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.612 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:06.872 13:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.872 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:06.872 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:06.872 13:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:06.872 13:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:06.872 13:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:06.872 13:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:06.872 13:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:06.872 13:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:06.872 13:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:06.872 13:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.872 13:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.872 [2024-11-17 13:18:55.881189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:06.872 [2024-11-17 13:18:55.883017] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
malloc2 is claimed 00:09:06.872 [2024-11-17 13:18:55.883067] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:06.872 [2024-11-17 13:18:55.883120] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:06.872 [2024-11-17 13:18:55.883180] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:06.872 [2024-11-17 13:18:55.883198] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:06.872 [2024-11-17 13:18:55.883231] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:06.872 [2024-11-17 13:18:55.883241] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:06.872 request: 00:09:06.872 { 00:09:06.872 "name": "raid_bdev1", 00:09:06.872 "raid_level": "concat", 00:09:06.872 "base_bdevs": [ 00:09:06.872 "malloc1", 00:09:06.872 "malloc2", 00:09:06.872 "malloc3" 00:09:06.872 ], 00:09:06.872 "strip_size_kb": 64, 00:09:06.872 "superblock": false, 00:09:06.872 "method": "bdev_raid_create", 00:09:06.872 "req_id": 1 00:09:06.872 } 00:09:06.872 Got JSON-RPC error response 00:09:06.872 response: 00:09:06.872 { 00:09:06.872 "code": -17, 00:09:06.872 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:06.872 } 00:09:06.872 13:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:06.872 13:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:06.872 13:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:06.872 13:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:06.872 13:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es 
== 0 )) 00:09:06.872 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.872 13:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.872 13:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.872 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:06.872 13:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.872 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:06.872 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:06.872 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:06.872 13:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.872 13:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.872 [2024-11-17 13:18:55.945017] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:06.872 [2024-11-17 13:18:55.945177] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:06.872 [2024-11-17 13:18:55.945235] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:06.872 [2024-11-17 13:18:55.945271] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:06.872 [2024-11-17 13:18:55.947616] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:06.872 [2024-11-17 13:18:55.947701] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:06.872 [2024-11-17 13:18:55.947829] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:06.872 [2024-11-17 13:18:55.947930] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:06.872 pt1 00:09:06.872 13:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.872 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:06.872 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:06.872 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:06.872 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:06.872 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:06.872 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:06.872 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.872 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.872 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.872 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.872 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:06.872 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.872 13:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.872 13:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.872 13:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.872 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.872 "name": "raid_bdev1", 
00:09:06.872 "uuid": "6fb5ae95-a36b-4459-98e1-cac748753d68", 00:09:06.872 "strip_size_kb": 64, 00:09:06.872 "state": "configuring", 00:09:06.872 "raid_level": "concat", 00:09:06.872 "superblock": true, 00:09:06.872 "num_base_bdevs": 3, 00:09:06.872 "num_base_bdevs_discovered": 1, 00:09:06.873 "num_base_bdevs_operational": 3, 00:09:06.873 "base_bdevs_list": [ 00:09:06.873 { 00:09:06.873 "name": "pt1", 00:09:06.873 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:06.873 "is_configured": true, 00:09:06.873 "data_offset": 2048, 00:09:06.873 "data_size": 63488 00:09:06.873 }, 00:09:06.873 { 00:09:06.873 "name": null, 00:09:06.873 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:06.873 "is_configured": false, 00:09:06.873 "data_offset": 2048, 00:09:06.873 "data_size": 63488 00:09:06.873 }, 00:09:06.873 { 00:09:06.873 "name": null, 00:09:06.873 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:06.873 "is_configured": false, 00:09:06.873 "data_offset": 2048, 00:09:06.873 "data_size": 63488 00:09:06.873 } 00:09:06.873 ] 00:09:06.873 }' 00:09:06.873 13:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.873 13:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.133 13:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:07.133 13:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:07.133 13:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.133 13:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.133 [2024-11-17 13:18:56.352784] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:07.133 [2024-11-17 13:18:56.352855] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:07.133 [2024-11-17 13:18:56.352882] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:07.133 [2024-11-17 13:18:56.352893] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:07.133 [2024-11-17 13:18:56.353411] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:07.133 [2024-11-17 13:18:56.353433] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:07.133 [2024-11-17 13:18:56.353532] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:07.133 [2024-11-17 13:18:56.353558] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:07.394 pt2 00:09:07.394 13:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.394 13:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:07.394 13:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.394 13:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.394 [2024-11-17 13:18:56.364812] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:07.394 13:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.394 13:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:07.394 13:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:07.394 13:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:07.394 13:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:07.394 13:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:07.394 13:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:09:07.394 13:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.394 13:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.394 13:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.394 13:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.394 13:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.394 13:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.394 13:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.394 13:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:07.394 13:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.394 13:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.394 "name": "raid_bdev1", 00:09:07.394 "uuid": "6fb5ae95-a36b-4459-98e1-cac748753d68", 00:09:07.394 "strip_size_kb": 64, 00:09:07.394 "state": "configuring", 00:09:07.394 "raid_level": "concat", 00:09:07.394 "superblock": true, 00:09:07.394 "num_base_bdevs": 3, 00:09:07.394 "num_base_bdevs_discovered": 1, 00:09:07.394 "num_base_bdevs_operational": 3, 00:09:07.394 "base_bdevs_list": [ 00:09:07.394 { 00:09:07.394 "name": "pt1", 00:09:07.394 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:07.394 "is_configured": true, 00:09:07.394 "data_offset": 2048, 00:09:07.394 "data_size": 63488 00:09:07.394 }, 00:09:07.394 { 00:09:07.394 "name": null, 00:09:07.394 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:07.394 "is_configured": false, 00:09:07.394 "data_offset": 0, 00:09:07.394 "data_size": 63488 00:09:07.394 }, 00:09:07.394 { 00:09:07.394 "name": null, 00:09:07.394 
"uuid": "00000000-0000-0000-0000-000000000003", 00:09:07.394 "is_configured": false, 00:09:07.394 "data_offset": 2048, 00:09:07.394 "data_size": 63488 00:09:07.394 } 00:09:07.394 ] 00:09:07.394 }' 00:09:07.394 13:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.394 13:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.654 13:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:07.654 13:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:07.654 13:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:07.654 13:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.654 13:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.654 [2024-11-17 13:18:56.832460] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:07.655 [2024-11-17 13:18:56.832621] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:07.655 [2024-11-17 13:18:56.832667] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:07.655 [2024-11-17 13:18:56.832682] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:07.655 [2024-11-17 13:18:56.833203] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:07.655 [2024-11-17 13:18:56.833240] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:07.655 [2024-11-17 13:18:56.833333] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:07.655 [2024-11-17 13:18:56.833360] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:07.655 pt2 00:09:07.655 13:18:56 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.655 13:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:07.655 13:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:07.655 13:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:07.655 13:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.655 13:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.655 [2024-11-17 13:18:56.844462] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:07.655 [2024-11-17 13:18:56.844531] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:07.655 [2024-11-17 13:18:56.844548] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:07.655 [2024-11-17 13:18:56.844560] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:07.655 [2024-11-17 13:18:56.845049] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:07.655 [2024-11-17 13:18:56.845074] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:07.655 [2024-11-17 13:18:56.845159] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:07.655 [2024-11-17 13:18:56.845186] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:07.655 [2024-11-17 13:18:56.845339] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:07.655 [2024-11-17 13:18:56.845352] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:07.655 [2024-11-17 13:18:56.845609] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:09:07.655 [2024-11-17 13:18:56.845796] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:07.655 [2024-11-17 13:18:56.845805] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:07.655 [2024-11-17 13:18:56.845959] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:07.655 pt3 00:09:07.655 13:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.655 13:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:07.655 13:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:07.655 13:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:07.655 13:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:07.655 13:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:07.655 13:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:07.655 13:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:07.655 13:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:07.655 13:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.655 13:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.655 13:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.655 13:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.655 13:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.655 13:18:56 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.655 13:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.655 13:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:07.655 13:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.926 13:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.926 "name": "raid_bdev1", 00:09:07.926 "uuid": "6fb5ae95-a36b-4459-98e1-cac748753d68", 00:09:07.926 "strip_size_kb": 64, 00:09:07.926 "state": "online", 00:09:07.926 "raid_level": "concat", 00:09:07.926 "superblock": true, 00:09:07.926 "num_base_bdevs": 3, 00:09:07.926 "num_base_bdevs_discovered": 3, 00:09:07.926 "num_base_bdevs_operational": 3, 00:09:07.926 "base_bdevs_list": [ 00:09:07.926 { 00:09:07.926 "name": "pt1", 00:09:07.926 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:07.926 "is_configured": true, 00:09:07.926 "data_offset": 2048, 00:09:07.926 "data_size": 63488 00:09:07.926 }, 00:09:07.926 { 00:09:07.926 "name": "pt2", 00:09:07.926 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:07.926 "is_configured": true, 00:09:07.926 "data_offset": 2048, 00:09:07.926 "data_size": 63488 00:09:07.926 }, 00:09:07.926 { 00:09:07.926 "name": "pt3", 00:09:07.926 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:07.926 "is_configured": true, 00:09:07.926 "data_offset": 2048, 00:09:07.926 "data_size": 63488 00:09:07.926 } 00:09:07.926 ] 00:09:07.926 }' 00:09:07.926 13:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.926 13:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.185 13:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:08.185 13:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=raid_bdev1 00:09:08.185 13:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:08.185 13:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:08.185 13:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:08.185 13:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:08.185 13:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:08.185 13:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.185 13:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:08.185 13:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.185 [2024-11-17 13:18:57.276020] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:08.185 13:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.185 13:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:08.185 "name": "raid_bdev1", 00:09:08.185 "aliases": [ 00:09:08.185 "6fb5ae95-a36b-4459-98e1-cac748753d68" 00:09:08.185 ], 00:09:08.185 "product_name": "Raid Volume", 00:09:08.185 "block_size": 512, 00:09:08.185 "num_blocks": 190464, 00:09:08.185 "uuid": "6fb5ae95-a36b-4459-98e1-cac748753d68", 00:09:08.185 "assigned_rate_limits": { 00:09:08.185 "rw_ios_per_sec": 0, 00:09:08.185 "rw_mbytes_per_sec": 0, 00:09:08.185 "r_mbytes_per_sec": 0, 00:09:08.185 "w_mbytes_per_sec": 0 00:09:08.185 }, 00:09:08.185 "claimed": false, 00:09:08.185 "zoned": false, 00:09:08.185 "supported_io_types": { 00:09:08.185 "read": true, 00:09:08.185 "write": true, 00:09:08.185 "unmap": true, 00:09:08.185 "flush": true, 00:09:08.185 "reset": true, 00:09:08.185 "nvme_admin": false, 00:09:08.185 "nvme_io": false, 
00:09:08.185 "nvme_io_md": false, 00:09:08.185 "write_zeroes": true, 00:09:08.185 "zcopy": false, 00:09:08.185 "get_zone_info": false, 00:09:08.185 "zone_management": false, 00:09:08.185 "zone_append": false, 00:09:08.185 "compare": false, 00:09:08.185 "compare_and_write": false, 00:09:08.185 "abort": false, 00:09:08.185 "seek_hole": false, 00:09:08.185 "seek_data": false, 00:09:08.185 "copy": false, 00:09:08.185 "nvme_iov_md": false 00:09:08.185 }, 00:09:08.185 "memory_domains": [ 00:09:08.185 { 00:09:08.185 "dma_device_id": "system", 00:09:08.185 "dma_device_type": 1 00:09:08.185 }, 00:09:08.185 { 00:09:08.185 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.185 "dma_device_type": 2 00:09:08.185 }, 00:09:08.185 { 00:09:08.185 "dma_device_id": "system", 00:09:08.185 "dma_device_type": 1 00:09:08.185 }, 00:09:08.185 { 00:09:08.185 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.185 "dma_device_type": 2 00:09:08.185 }, 00:09:08.185 { 00:09:08.185 "dma_device_id": "system", 00:09:08.185 "dma_device_type": 1 00:09:08.185 }, 00:09:08.185 { 00:09:08.185 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.185 "dma_device_type": 2 00:09:08.185 } 00:09:08.185 ], 00:09:08.185 "driver_specific": { 00:09:08.185 "raid": { 00:09:08.185 "uuid": "6fb5ae95-a36b-4459-98e1-cac748753d68", 00:09:08.185 "strip_size_kb": 64, 00:09:08.185 "state": "online", 00:09:08.185 "raid_level": "concat", 00:09:08.185 "superblock": true, 00:09:08.185 "num_base_bdevs": 3, 00:09:08.185 "num_base_bdevs_discovered": 3, 00:09:08.185 "num_base_bdevs_operational": 3, 00:09:08.185 "base_bdevs_list": [ 00:09:08.185 { 00:09:08.185 "name": "pt1", 00:09:08.185 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:08.185 "is_configured": true, 00:09:08.185 "data_offset": 2048, 00:09:08.185 "data_size": 63488 00:09:08.185 }, 00:09:08.185 { 00:09:08.185 "name": "pt2", 00:09:08.185 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:08.185 "is_configured": true, 00:09:08.185 "data_offset": 2048, 00:09:08.185 
"data_size": 63488 00:09:08.185 }, 00:09:08.185 { 00:09:08.185 "name": "pt3", 00:09:08.185 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:08.185 "is_configured": true, 00:09:08.185 "data_offset": 2048, 00:09:08.185 "data_size": 63488 00:09:08.185 } 00:09:08.185 ] 00:09:08.185 } 00:09:08.185 } 00:09:08.185 }' 00:09:08.185 13:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:08.185 13:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:08.185 pt2 00:09:08.185 pt3' 00:09:08.185 13:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:08.186 13:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:08.186 13:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:08.186 13:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:08.186 13:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:08.186 13:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.186 13:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.445 13:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.445 13:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:08.445 13:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:08.445 13:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:08.445 13:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:09:08.445 13:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.445 13:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.445 13:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:08.445 13:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.445 13:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:08.445 13:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:08.445 13:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:08.445 13:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:08.445 13:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:08.445 13:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.445 13:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.445 13:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.445 13:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:08.445 13:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:08.445 13:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:08.445 13:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:08.445 13:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.445 13:18:57 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:08.445 [2024-11-17 13:18:57.543520] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:08.445 13:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.445 13:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 6fb5ae95-a36b-4459-98e1-cac748753d68 '!=' 6fb5ae95-a36b-4459-98e1-cac748753d68 ']' 00:09:08.445 13:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:08.445 13:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:08.445 13:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:08.445 13:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66783 00:09:08.445 13:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 66783 ']' 00:09:08.445 13:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 66783 00:09:08.445 13:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:08.445 13:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:08.445 13:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66783 00:09:08.445 13:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:08.445 13:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:08.445 13:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66783' 00:09:08.445 killing process with pid 66783 00:09:08.445 13:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 66783 00:09:08.445 [2024-11-17 13:18:57.610903] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:09:08.445 [2024-11-17 13:18:57.611052] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:08.445 13:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 66783 00:09:08.445 [2024-11-17 13:18:57.611163] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:08.445 [2024-11-17 13:18:57.611257] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:08.704 [2024-11-17 13:18:57.904954] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:10.081 13:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:10.081 00:09:10.081 real 0m5.124s 00:09:10.081 user 0m7.327s 00:09:10.081 sys 0m0.848s 00:09:10.081 13:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:10.081 13:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.081 ************************************ 00:09:10.081 END TEST raid_superblock_test 00:09:10.081 ************************************ 00:09:10.081 13:18:59 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:09:10.081 13:18:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:10.081 13:18:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:10.081 13:18:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:10.081 ************************************ 00:09:10.081 START TEST raid_read_error_test 00:09:10.081 ************************************ 00:09:10.081 13:18:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:09:10.081 13:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:10.081 13:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 00:09:10.081 13:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:10.081 13:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:10.081 13:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:10.082 13:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:10.082 13:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:10.082 13:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:10.082 13:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:10.082 13:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:10.082 13:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:10.082 13:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:10.082 13:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:10.082 13:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:10.082 13:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:10.082 13:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:10.082 13:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:10.082 13:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:10.082 13:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:10.082 13:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:10.082 13:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:10.082 13:18:59 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:10.082 13:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:10.082 13:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:10.082 13:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:10.082 13:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.qpg7dchVvC 00:09:10.082 13:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67032 00:09:10.082 13:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:10.082 13:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67032 00:09:10.082 13:18:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 67032 ']' 00:09:10.082 13:18:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.082 13:18:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:10.082 13:18:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:10.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:10.082 13:18:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:10.082 13:18:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.082 [2024-11-17 13:18:59.178689] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:09:10.082 [2024-11-17 13:18:59.178886] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67032 ] 00:09:10.344 [2024-11-17 13:18:59.354084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.344 [2024-11-17 13:18:59.472192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.609 [2024-11-17 13:18:59.685798] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:10.609 [2024-11-17 13:18:59.685934] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:10.869 13:19:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:10.869 13:19:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:10.869 13:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:10.869 13:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:10.869 13:19:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.869 13:19:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.869 BaseBdev1_malloc 00:09:10.869 13:19:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.869 13:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:10.869 13:19:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.869 13:19:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.869 true 00:09:10.869 13:19:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:10.869 13:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:10.869 13:19:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.869 13:19:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.869 [2024-11-17 13:19:00.083498] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:10.869 [2024-11-17 13:19:00.083553] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:10.869 [2024-11-17 13:19:00.083573] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:10.869 [2024-11-17 13:19:00.083583] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:10.869 [2024-11-17 13:19:00.085731] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:10.869 [2024-11-17 13:19:00.085774] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:10.869 BaseBdev1 00:09:10.869 13:19:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.869 13:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:10.869 13:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:10.869 13:19:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.869 13:19:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.129 BaseBdev2_malloc 00:09:11.129 13:19:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.129 13:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:11.129 13:19:00 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.129 13:19:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.129 true 00:09:11.129 13:19:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.129 13:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:11.129 13:19:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.129 13:19:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.129 [2024-11-17 13:19:00.150706] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:11.129 [2024-11-17 13:19:00.150802] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:11.129 [2024-11-17 13:19:00.150821] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:11.129 [2024-11-17 13:19:00.150832] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:11.129 [2024-11-17 13:19:00.152993] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:11.129 [2024-11-17 13:19:00.153036] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:11.129 BaseBdev2 00:09:11.129 13:19:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.129 13:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:11.129 13:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:11.129 13:19:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.129 13:19:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.129 BaseBdev3_malloc 00:09:11.129 13:19:00 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.129 13:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:11.129 13:19:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.129 13:19:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.129 true 00:09:11.129 13:19:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.129 13:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:11.129 13:19:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.129 13:19:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.129 [2024-11-17 13:19:00.233816] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:11.129 [2024-11-17 13:19:00.233868] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:11.129 [2024-11-17 13:19:00.233884] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:11.129 [2024-11-17 13:19:00.233895] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:11.129 [2024-11-17 13:19:00.236047] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:11.129 [2024-11-17 13:19:00.236087] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:11.129 BaseBdev3 00:09:11.129 13:19:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.129 13:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:11.129 13:19:00 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.129 13:19:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.129 [2024-11-17 13:19:00.245856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:11.129 [2024-11-17 13:19:00.247660] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:11.129 [2024-11-17 13:19:00.247753] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:11.129 [2024-11-17 13:19:00.247946] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:11.129 [2024-11-17 13:19:00.247957] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:11.129 [2024-11-17 13:19:00.248184] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:11.129 [2024-11-17 13:19:00.248351] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:11.129 [2024-11-17 13:19:00.248399] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:11.130 [2024-11-17 13:19:00.248574] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:11.130 13:19:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.130 13:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:11.130 13:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:11.130 13:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:11.130 13:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:11.130 13:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.130 13:19:00 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.130 13:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.130 13:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.130 13:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.130 13:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.130 13:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.130 13:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:11.130 13:19:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.130 13:19:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.130 13:19:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.130 13:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.130 "name": "raid_bdev1", 00:09:11.130 "uuid": "64ed652e-144d-4e92-b88f-cc4c721de726", 00:09:11.130 "strip_size_kb": 64, 00:09:11.130 "state": "online", 00:09:11.130 "raid_level": "concat", 00:09:11.130 "superblock": true, 00:09:11.130 "num_base_bdevs": 3, 00:09:11.130 "num_base_bdevs_discovered": 3, 00:09:11.130 "num_base_bdevs_operational": 3, 00:09:11.130 "base_bdevs_list": [ 00:09:11.130 { 00:09:11.130 "name": "BaseBdev1", 00:09:11.130 "uuid": "57503b1e-3c3f-5f71-ab27-a40cb72e98f8", 00:09:11.130 "is_configured": true, 00:09:11.130 "data_offset": 2048, 00:09:11.130 "data_size": 63488 00:09:11.130 }, 00:09:11.130 { 00:09:11.130 "name": "BaseBdev2", 00:09:11.130 "uuid": "6b9ae6dd-81d6-567d-aa21-b6d99e595e29", 00:09:11.130 "is_configured": true, 00:09:11.130 "data_offset": 2048, 00:09:11.130 "data_size": 63488 
00:09:11.130 }, 00:09:11.130 { 00:09:11.130 "name": "BaseBdev3", 00:09:11.130 "uuid": "5aa461f6-f259-58b7-92ae-94ceb5b67bdf", 00:09:11.130 "is_configured": true, 00:09:11.130 "data_offset": 2048, 00:09:11.130 "data_size": 63488 00:09:11.130 } 00:09:11.130 ] 00:09:11.130 }' 00:09:11.130 13:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.130 13:19:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.699 13:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:11.699 13:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:11.699 [2024-11-17 13:19:00.770247] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:12.640 13:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:12.640 13:19:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.640 13:19:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.640 13:19:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.640 13:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:12.640 13:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:12.640 13:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:12.640 13:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:12.640 13:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:12.640 13:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:12.640 13:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:12.640 13:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:12.640 13:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:12.640 13:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.640 13:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.640 13:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.640 13:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.640 13:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.640 13:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:12.640 13:19:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.640 13:19:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.640 13:19:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.640 13:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.640 "name": "raid_bdev1", 00:09:12.640 "uuid": "64ed652e-144d-4e92-b88f-cc4c721de726", 00:09:12.640 "strip_size_kb": 64, 00:09:12.640 "state": "online", 00:09:12.640 "raid_level": "concat", 00:09:12.640 "superblock": true, 00:09:12.640 "num_base_bdevs": 3, 00:09:12.640 "num_base_bdevs_discovered": 3, 00:09:12.640 "num_base_bdevs_operational": 3, 00:09:12.640 "base_bdevs_list": [ 00:09:12.640 { 00:09:12.640 "name": "BaseBdev1", 00:09:12.640 "uuid": "57503b1e-3c3f-5f71-ab27-a40cb72e98f8", 00:09:12.640 "is_configured": true, 00:09:12.640 "data_offset": 2048, 00:09:12.640 "data_size": 63488 
00:09:12.640 }, 00:09:12.640 { 00:09:12.640 "name": "BaseBdev2", 00:09:12.640 "uuid": "6b9ae6dd-81d6-567d-aa21-b6d99e595e29", 00:09:12.640 "is_configured": true, 00:09:12.640 "data_offset": 2048, 00:09:12.640 "data_size": 63488 00:09:12.640 }, 00:09:12.640 { 00:09:12.640 "name": "BaseBdev3", 00:09:12.640 "uuid": "5aa461f6-f259-58b7-92ae-94ceb5b67bdf", 00:09:12.640 "is_configured": true, 00:09:12.640 "data_offset": 2048, 00:09:12.640 "data_size": 63488 00:09:12.640 } 00:09:12.640 ] 00:09:12.640 }' 00:09:12.640 13:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.640 13:19:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.210 13:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:13.210 13:19:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.210 13:19:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.210 [2024-11-17 13:19:02.182547] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:13.210 [2024-11-17 13:19:02.182582] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:13.210 [2024-11-17 13:19:02.185240] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:13.210 [2024-11-17 13:19:02.185288] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:13.210 [2024-11-17 13:19:02.185326] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:13.210 [2024-11-17 13:19:02.185338] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:13.210 { 00:09:13.210 "results": [ 00:09:13.210 { 00:09:13.210 "job": "raid_bdev1", 00:09:13.210 "core_mask": "0x1", 00:09:13.210 "workload": "randrw", 00:09:13.210 "percentage": 50, 
00:09:13.210 "status": "finished", 00:09:13.210 "queue_depth": 1, 00:09:13.210 "io_size": 131072, 00:09:13.210 "runtime": 1.41323, 00:09:13.210 "iops": 15990.320046984567, 00:09:13.210 "mibps": 1998.7900058730709, 00:09:13.210 "io_failed": 1, 00:09:13.210 "io_timeout": 0, 00:09:13.210 "avg_latency_us": 86.81802243829237, 00:09:13.210 "min_latency_us": 25.9353711790393, 00:09:13.210 "max_latency_us": 1438.071615720524 00:09:13.210 } 00:09:13.210 ], 00:09:13.210 "core_count": 1 00:09:13.210 } 00:09:13.210 13:19:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.210 13:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67032 00:09:13.210 13:19:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 67032 ']' 00:09:13.210 13:19:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 67032 00:09:13.210 13:19:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:13.210 13:19:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:13.210 13:19:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67032 00:09:13.210 killing process with pid 67032 00:09:13.210 13:19:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:13.210 13:19:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:13.210 13:19:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67032' 00:09:13.210 13:19:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 67032 00:09:13.210 [2024-11-17 13:19:02.228629] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:13.210 13:19:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 67032 00:09:13.470 [2024-11-17 
13:19:02.458060] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:14.410 13:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.qpg7dchVvC 00:09:14.410 13:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:14.410 13:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:14.410 13:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:09:14.410 13:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:14.410 ************************************ 00:09:14.410 END TEST raid_read_error_test 00:09:14.410 ************************************ 00:09:14.410 13:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:14.410 13:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:14.410 13:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:09:14.410 00:09:14.410 real 0m4.545s 00:09:14.410 user 0m5.406s 00:09:14.410 sys 0m0.575s 00:09:14.410 13:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:14.411 13:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.671 13:19:03 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:09:14.671 13:19:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:14.671 13:19:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:14.671 13:19:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:14.671 ************************************ 00:09:14.671 START TEST raid_write_error_test 00:09:14.671 ************************************ 00:09:14.671 13:19:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:09:14.671 13:19:03 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:14.671 13:19:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:14.671 13:19:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:14.671 13:19:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:14.671 13:19:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:14.671 13:19:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:14.671 13:19:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:14.671 13:19:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:14.671 13:19:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:14.671 13:19:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:14.671 13:19:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:14.671 13:19:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:14.671 13:19:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:14.671 13:19:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:14.671 13:19:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:14.671 13:19:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:14.671 13:19:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:14.671 13:19:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:14.671 13:19:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:14.671 13:19:03 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:14.671 13:19:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:14.671 13:19:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:14.671 13:19:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:14.671 13:19:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:14.671 13:19:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:14.671 13:19:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.7mgBVqQQdk 00:09:14.671 13:19:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67182 00:09:14.671 13:19:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67182 00:09:14.671 13:19:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67182 ']' 00:09:14.671 13:19:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.671 13:19:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:14.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.671 13:19:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:14.671 13:19:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:14.671 13:19:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:14.671 13:19:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.671 [2024-11-17 13:19:03.785972] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:09:14.671 [2024-11-17 13:19:03.786195] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67182 ] 00:09:14.931 [2024-11-17 13:19:03.956896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.931 [2024-11-17 13:19:04.072884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.191 [2024-11-17 13:19:04.274404] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:15.191 [2024-11-17 13:19:04.274456] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:15.451 13:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:15.451 13:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:15.451 13:19:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:15.451 13:19:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:15.451 13:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.451 13:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.451 BaseBdev1_malloc 00:09:15.451 13:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.451 13:19:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:15.451 13:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.451 13:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.710 true 00:09:15.710 13:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.710 13:19:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:15.710 13:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.710 13:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.710 [2024-11-17 13:19:04.683261] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:15.710 [2024-11-17 13:19:04.683369] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:15.710 [2024-11-17 13:19:04.683408] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:15.710 [2024-11-17 13:19:04.683439] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:15.710 [2024-11-17 13:19:04.685698] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:15.710 [2024-11-17 13:19:04.685779] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:15.710 BaseBdev1 00:09:15.710 13:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.710 13:19:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:15.710 13:19:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:15.710 13:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.710 13:19:04 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:15.710 BaseBdev2_malloc 00:09:15.710 13:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.711 13:19:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:15.711 13:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.711 13:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.711 true 00:09:15.711 13:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.711 13:19:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:15.711 13:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.711 13:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.711 [2024-11-17 13:19:04.752856] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:15.711 [2024-11-17 13:19:04.752964] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:15.711 [2024-11-17 13:19:04.752987] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:15.711 [2024-11-17 13:19:04.752998] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:15.711 [2024-11-17 13:19:04.755066] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:15.711 [2024-11-17 13:19:04.755109] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:15.711 BaseBdev2 00:09:15.711 13:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.711 13:19:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:15.711 13:19:04 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:15.711 13:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.711 13:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.711 BaseBdev3_malloc 00:09:15.711 13:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.711 13:19:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:15.711 13:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.711 13:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.711 true 00:09:15.711 13:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.711 13:19:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:15.711 13:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.711 13:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.711 [2024-11-17 13:19:04.831995] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:15.711 [2024-11-17 13:19:04.832090] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:15.711 [2024-11-17 13:19:04.832122] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:15.711 [2024-11-17 13:19:04.832156] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:15.711 [2024-11-17 13:19:04.834236] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:15.711 [2024-11-17 13:19:04.834311] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:15.711 BaseBdev3 00:09:15.711 13:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.711 13:19:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:15.711 13:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.711 13:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.711 [2024-11-17 13:19:04.844057] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:15.711 [2024-11-17 13:19:04.845946] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:15.711 [2024-11-17 13:19:04.846073] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:15.711 [2024-11-17 13:19:04.846357] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:15.711 [2024-11-17 13:19:04.846408] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:15.711 [2024-11-17 13:19:04.846713] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:15.711 [2024-11-17 13:19:04.846870] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:15.711 [2024-11-17 13:19:04.846884] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:15.711 [2024-11-17 13:19:04.847021] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:15.711 13:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.711 13:19:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:15.711 13:19:04 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:15.711 13:19:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:15.711 13:19:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:15.711 13:19:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.711 13:19:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.711 13:19:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.711 13:19:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.711 13:19:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.711 13:19:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.711 13:19:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.711 13:19:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:15.711 13:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.711 13:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.711 13:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.711 13:19:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.711 "name": "raid_bdev1", 00:09:15.711 "uuid": "08702f76-4932-409b-8a18-cbe0bb62c578", 00:09:15.711 "strip_size_kb": 64, 00:09:15.711 "state": "online", 00:09:15.711 "raid_level": "concat", 00:09:15.711 "superblock": true, 00:09:15.711 "num_base_bdevs": 3, 00:09:15.711 "num_base_bdevs_discovered": 3, 00:09:15.711 "num_base_bdevs_operational": 3, 00:09:15.711 "base_bdevs_list": [ 00:09:15.711 { 00:09:15.711 
"name": "BaseBdev1", 00:09:15.711 "uuid": "ea257dce-1018-5e84-bd6f-335b823de37d", 00:09:15.711 "is_configured": true, 00:09:15.711 "data_offset": 2048, 00:09:15.711 "data_size": 63488 00:09:15.711 }, 00:09:15.711 { 00:09:15.711 "name": "BaseBdev2", 00:09:15.711 "uuid": "aee6e00e-2a00-591d-b3b1-c06d2487f7b0", 00:09:15.711 "is_configured": true, 00:09:15.711 "data_offset": 2048, 00:09:15.711 "data_size": 63488 00:09:15.711 }, 00:09:15.711 { 00:09:15.711 "name": "BaseBdev3", 00:09:15.711 "uuid": "dc196d97-0363-5741-a090-074e9470aee7", 00:09:15.711 "is_configured": true, 00:09:15.711 "data_offset": 2048, 00:09:15.711 "data_size": 63488 00:09:15.711 } 00:09:15.711 ] 00:09:15.711 }' 00:09:15.711 13:19:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.711 13:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.280 13:19:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:16.280 13:19:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:16.280 [2024-11-17 13:19:05.400360] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:17.220 13:19:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:17.220 13:19:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.220 13:19:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.220 13:19:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.220 13:19:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:17.220 13:19:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:17.220 13:19:06 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:17.220 13:19:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:17.220 13:19:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:17.220 13:19:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:17.220 13:19:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:17.220 13:19:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:17.220 13:19:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:17.220 13:19:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.220 13:19:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.220 13:19:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.220 13:19:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.220 13:19:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.221 13:19:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:17.221 13:19:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.221 13:19:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.221 13:19:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.221 13:19:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.221 "name": "raid_bdev1", 00:09:17.221 "uuid": "08702f76-4932-409b-8a18-cbe0bb62c578", 00:09:17.221 "strip_size_kb": 64, 00:09:17.221 "state": "online", 
00:09:17.221 "raid_level": "concat", 00:09:17.221 "superblock": true, 00:09:17.221 "num_base_bdevs": 3, 00:09:17.221 "num_base_bdevs_discovered": 3, 00:09:17.221 "num_base_bdevs_operational": 3, 00:09:17.221 "base_bdevs_list": [ 00:09:17.221 { 00:09:17.221 "name": "BaseBdev1", 00:09:17.221 "uuid": "ea257dce-1018-5e84-bd6f-335b823de37d", 00:09:17.221 "is_configured": true, 00:09:17.221 "data_offset": 2048, 00:09:17.221 "data_size": 63488 00:09:17.221 }, 00:09:17.221 { 00:09:17.221 "name": "BaseBdev2", 00:09:17.221 "uuid": "aee6e00e-2a00-591d-b3b1-c06d2487f7b0", 00:09:17.221 "is_configured": true, 00:09:17.221 "data_offset": 2048, 00:09:17.221 "data_size": 63488 00:09:17.221 }, 00:09:17.221 { 00:09:17.221 "name": "BaseBdev3", 00:09:17.221 "uuid": "dc196d97-0363-5741-a090-074e9470aee7", 00:09:17.221 "is_configured": true, 00:09:17.221 "data_offset": 2048, 00:09:17.221 "data_size": 63488 00:09:17.221 } 00:09:17.221 ] 00:09:17.221 }' 00:09:17.221 13:19:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.221 13:19:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.845 13:19:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:17.845 13:19:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.845 13:19:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.845 [2024-11-17 13:19:06.817079] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:17.845 [2024-11-17 13:19:06.817157] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:17.845 [2024-11-17 13:19:06.819924] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:17.845 [2024-11-17 13:19:06.820013] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:17.845 [2024-11-17 13:19:06.820071] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:17.845 [2024-11-17 13:19:06.820128] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:17.845 { 00:09:17.845 "results": [ 00:09:17.845 { 00:09:17.845 "job": "raid_bdev1", 00:09:17.845 "core_mask": "0x1", 00:09:17.845 "workload": "randrw", 00:09:17.845 "percentage": 50, 00:09:17.845 "status": "finished", 00:09:17.845 "queue_depth": 1, 00:09:17.845 "io_size": 131072, 00:09:17.845 "runtime": 1.417621, 00:09:17.845 "iops": 15935.147687569526, 00:09:17.845 "mibps": 1991.8934609461908, 00:09:17.845 "io_failed": 1, 00:09:17.845 "io_timeout": 0, 00:09:17.845 "avg_latency_us": 87.24586732089276, 00:09:17.845 "min_latency_us": 26.606113537117903, 00:09:17.845 "max_latency_us": 1466.6899563318777 00:09:17.845 } 00:09:17.845 ], 00:09:17.845 "core_count": 1 00:09:17.845 } 00:09:17.845 13:19:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.845 13:19:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67182 00:09:17.845 13:19:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67182 ']' 00:09:17.845 13:19:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 67182 00:09:17.845 13:19:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:17.845 13:19:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:17.845 13:19:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67182 00:09:17.845 13:19:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:17.845 killing process with pid 67182 00:09:17.845 13:19:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:17.845 
13:19:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67182' 00:09:17.845 13:19:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67182 00:09:17.845 [2024-11-17 13:19:06.866477] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:17.845 13:19:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67182 00:09:18.105 [2024-11-17 13:19:07.100157] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:19.044 13:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:19.044 13:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.7mgBVqQQdk 00:09:19.044 13:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:19.044 13:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:09:19.303 13:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:19.303 13:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:19.303 13:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:19.303 13:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:09:19.303 00:09:19.303 real 0m4.588s 00:09:19.303 user 0m5.505s 00:09:19.303 sys 0m0.569s 00:09:19.303 13:19:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:19.303 ************************************ 00:09:19.303 END TEST raid_write_error_test 00:09:19.303 ************************************ 00:09:19.303 13:19:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.303 13:19:08 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:19.303 13:19:08 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:09:19.303 13:19:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:19.303 13:19:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:19.303 13:19:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:19.303 ************************************ 00:09:19.303 START TEST raid_state_function_test 00:09:19.303 ************************************ 00:09:19.303 13:19:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:09:19.303 13:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:19.303 13:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:19.303 13:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:19.303 13:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:19.303 13:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:19.303 13:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:19.304 13:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:19.304 13:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:19.304 13:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:19.304 13:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:19.304 13:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:19.304 13:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:19.304 13:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:19.304 13:19:08 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:19.304 13:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:19.304 13:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:19.304 13:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:19.304 13:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:19.304 13:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:19.304 13:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:19.304 13:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:19.304 13:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:19.304 13:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:19.304 13:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:19.304 13:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:19.304 13:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67321 00:09:19.304 13:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:19.304 13:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67321' 00:09:19.304 Process raid pid: 67321 00:09:19.304 13:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67321 00:09:19.304 13:19:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67321 ']' 00:09:19.304 13:19:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:19.304 13:19:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:19.304 13:19:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:19.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:19.304 13:19:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:19.304 13:19:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.304 [2024-11-17 13:19:08.449619] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:09:19.304 [2024-11-17 13:19:08.449832] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:19.563 [2024-11-17 13:19:08.628253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.564 [2024-11-17 13:19:08.743266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.823 [2024-11-17 13:19:08.944169] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:19.823 [2024-11-17 13:19:08.944220] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:20.083 13:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:20.083 13:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:20.083 13:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:20.083 13:19:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.083 13:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.083 [2024-11-17 13:19:09.270533] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:20.083 [2024-11-17 13:19:09.270627] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:20.083 [2024-11-17 13:19:09.270657] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:20.083 [2024-11-17 13:19:09.270679] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:20.083 [2024-11-17 13:19:09.270698] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:20.083 [2024-11-17 13:19:09.270718] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:20.083 13:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.083 13:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:20.083 13:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:20.083 13:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:20.083 13:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:20.083 13:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:20.083 13:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:20.083 13:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.083 13:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.083 
13:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.083 13:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.083 13:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.083 13:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.083 13:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.083 13:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.083 13:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.342 13:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.342 "name": "Existed_Raid", 00:09:20.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.342 "strip_size_kb": 0, 00:09:20.342 "state": "configuring", 00:09:20.342 "raid_level": "raid1", 00:09:20.342 "superblock": false, 00:09:20.342 "num_base_bdevs": 3, 00:09:20.342 "num_base_bdevs_discovered": 0, 00:09:20.342 "num_base_bdevs_operational": 3, 00:09:20.342 "base_bdevs_list": [ 00:09:20.342 { 00:09:20.342 "name": "BaseBdev1", 00:09:20.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.342 "is_configured": false, 00:09:20.342 "data_offset": 0, 00:09:20.342 "data_size": 0 00:09:20.342 }, 00:09:20.342 { 00:09:20.342 "name": "BaseBdev2", 00:09:20.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.342 "is_configured": false, 00:09:20.342 "data_offset": 0, 00:09:20.342 "data_size": 0 00:09:20.342 }, 00:09:20.342 { 00:09:20.342 "name": "BaseBdev3", 00:09:20.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.342 "is_configured": false, 00:09:20.342 "data_offset": 0, 00:09:20.342 "data_size": 0 00:09:20.342 } 00:09:20.342 ] 00:09:20.342 }' 00:09:20.342 13:19:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.342 13:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.602 13:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:20.602 13:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.602 13:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.602 [2024-11-17 13:19:09.677786] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:20.602 [2024-11-17 13:19:09.677873] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:20.602 13:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.602 13:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:20.602 13:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.602 13:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.602 [2024-11-17 13:19:09.689772] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:20.602 [2024-11-17 13:19:09.689822] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:20.602 [2024-11-17 13:19:09.689833] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:20.602 [2024-11-17 13:19:09.689843] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:20.602 [2024-11-17 13:19:09.689850] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:20.602 [2024-11-17 13:19:09.689860] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:20.602 13:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.602 13:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:20.602 13:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.602 13:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.602 [2024-11-17 13:19:09.738090] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:20.602 BaseBdev1 00:09:20.602 13:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.602 13:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:20.602 13:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:20.602 13:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:20.602 13:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:20.602 13:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:20.602 13:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:20.602 13:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:20.602 13:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.602 13:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.602 13:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.602 13:19:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:20.602 13:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.602 13:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.602 [ 00:09:20.602 { 00:09:20.602 "name": "BaseBdev1", 00:09:20.602 "aliases": [ 00:09:20.602 "c31e1d6b-a95c-4355-a688-9c267a35e5e8" 00:09:20.602 ], 00:09:20.602 "product_name": "Malloc disk", 00:09:20.602 "block_size": 512, 00:09:20.602 "num_blocks": 65536, 00:09:20.602 "uuid": "c31e1d6b-a95c-4355-a688-9c267a35e5e8", 00:09:20.602 "assigned_rate_limits": { 00:09:20.602 "rw_ios_per_sec": 0, 00:09:20.602 "rw_mbytes_per_sec": 0, 00:09:20.602 "r_mbytes_per_sec": 0, 00:09:20.602 "w_mbytes_per_sec": 0 00:09:20.602 }, 00:09:20.602 "claimed": true, 00:09:20.602 "claim_type": "exclusive_write", 00:09:20.602 "zoned": false, 00:09:20.602 "supported_io_types": { 00:09:20.602 "read": true, 00:09:20.602 "write": true, 00:09:20.602 "unmap": true, 00:09:20.602 "flush": true, 00:09:20.602 "reset": true, 00:09:20.602 "nvme_admin": false, 00:09:20.602 "nvme_io": false, 00:09:20.602 "nvme_io_md": false, 00:09:20.602 "write_zeroes": true, 00:09:20.602 "zcopy": true, 00:09:20.602 "get_zone_info": false, 00:09:20.602 "zone_management": false, 00:09:20.602 "zone_append": false, 00:09:20.602 "compare": false, 00:09:20.602 "compare_and_write": false, 00:09:20.602 "abort": true, 00:09:20.602 "seek_hole": false, 00:09:20.602 "seek_data": false, 00:09:20.602 "copy": true, 00:09:20.602 "nvme_iov_md": false 00:09:20.602 }, 00:09:20.602 "memory_domains": [ 00:09:20.602 { 00:09:20.602 "dma_device_id": "system", 00:09:20.602 "dma_device_type": 1 00:09:20.602 }, 00:09:20.602 { 00:09:20.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.602 "dma_device_type": 2 00:09:20.602 } 00:09:20.602 ], 00:09:20.602 "driver_specific": {} 00:09:20.602 } 00:09:20.602 ] 00:09:20.602 13:19:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.602 13:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:20.602 13:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:20.602 13:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:20.602 13:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:20.602 13:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:20.602 13:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:20.602 13:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:20.602 13:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.602 13:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.602 13:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.602 13:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.602 13:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.602 13:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.602 13:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.602 13:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.602 13:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.862 13:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:20.862 "name": "Existed_Raid", 00:09:20.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.862 "strip_size_kb": 0, 00:09:20.862 "state": "configuring", 00:09:20.862 "raid_level": "raid1", 00:09:20.862 "superblock": false, 00:09:20.862 "num_base_bdevs": 3, 00:09:20.862 "num_base_bdevs_discovered": 1, 00:09:20.862 "num_base_bdevs_operational": 3, 00:09:20.862 "base_bdevs_list": [ 00:09:20.862 { 00:09:20.862 "name": "BaseBdev1", 00:09:20.862 "uuid": "c31e1d6b-a95c-4355-a688-9c267a35e5e8", 00:09:20.862 "is_configured": true, 00:09:20.862 "data_offset": 0, 00:09:20.862 "data_size": 65536 00:09:20.862 }, 00:09:20.862 { 00:09:20.862 "name": "BaseBdev2", 00:09:20.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.862 "is_configured": false, 00:09:20.862 "data_offset": 0, 00:09:20.862 "data_size": 0 00:09:20.862 }, 00:09:20.862 { 00:09:20.862 "name": "BaseBdev3", 00:09:20.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.862 "is_configured": false, 00:09:20.862 "data_offset": 0, 00:09:20.862 "data_size": 0 00:09:20.862 } 00:09:20.862 ] 00:09:20.862 }' 00:09:20.862 13:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.862 13:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.122 13:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:21.122 13:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.122 13:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.122 [2024-11-17 13:19:10.225356] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:21.122 [2024-11-17 13:19:10.225413] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:21.122 13:19:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.122 13:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:21.122 13:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.122 13:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.122 [2024-11-17 13:19:10.233382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:21.122 [2024-11-17 13:19:10.235245] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:21.122 [2024-11-17 13:19:10.235286] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:21.122 [2024-11-17 13:19:10.235296] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:21.122 [2024-11-17 13:19:10.235305] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:21.122 13:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.122 13:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:21.122 13:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:21.122 13:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:21.122 13:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:21.122 13:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:21.122 13:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:21.122 13:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:09:21.122 13:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:21.122 13:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.122 13:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.122 13:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.122 13:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.122 13:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.122 13:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.122 13:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.122 13:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.122 13:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.122 13:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.122 "name": "Existed_Raid", 00:09:21.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.122 "strip_size_kb": 0, 00:09:21.122 "state": "configuring", 00:09:21.122 "raid_level": "raid1", 00:09:21.122 "superblock": false, 00:09:21.122 "num_base_bdevs": 3, 00:09:21.122 "num_base_bdevs_discovered": 1, 00:09:21.122 "num_base_bdevs_operational": 3, 00:09:21.122 "base_bdevs_list": [ 00:09:21.122 { 00:09:21.122 "name": "BaseBdev1", 00:09:21.122 "uuid": "c31e1d6b-a95c-4355-a688-9c267a35e5e8", 00:09:21.122 "is_configured": true, 00:09:21.122 "data_offset": 0, 00:09:21.122 "data_size": 65536 00:09:21.122 }, 00:09:21.122 { 00:09:21.122 "name": "BaseBdev2", 00:09:21.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.122 
"is_configured": false, 00:09:21.122 "data_offset": 0, 00:09:21.122 "data_size": 0 00:09:21.122 }, 00:09:21.122 { 00:09:21.122 "name": "BaseBdev3", 00:09:21.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.122 "is_configured": false, 00:09:21.122 "data_offset": 0, 00:09:21.122 "data_size": 0 00:09:21.122 } 00:09:21.122 ] 00:09:21.122 }' 00:09:21.122 13:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.122 13:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.692 13:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:21.692 13:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.692 13:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.692 [2024-11-17 13:19:10.726439] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:21.692 BaseBdev2 00:09:21.692 13:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.692 13:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:21.692 13:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:21.692 13:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:21.692 13:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:21.692 13:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:21.692 13:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:21.692 13:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:21.692 13:19:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.692 13:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.692 13:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.692 13:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:21.692 13:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.692 13:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.692 [ 00:09:21.692 { 00:09:21.692 "name": "BaseBdev2", 00:09:21.692 "aliases": [ 00:09:21.692 "338c86a2-eede-4c33-b8c7-7b50faaefce7" 00:09:21.692 ], 00:09:21.692 "product_name": "Malloc disk", 00:09:21.692 "block_size": 512, 00:09:21.692 "num_blocks": 65536, 00:09:21.692 "uuid": "338c86a2-eede-4c33-b8c7-7b50faaefce7", 00:09:21.692 "assigned_rate_limits": { 00:09:21.692 "rw_ios_per_sec": 0, 00:09:21.692 "rw_mbytes_per_sec": 0, 00:09:21.692 "r_mbytes_per_sec": 0, 00:09:21.692 "w_mbytes_per_sec": 0 00:09:21.692 }, 00:09:21.692 "claimed": true, 00:09:21.692 "claim_type": "exclusive_write", 00:09:21.692 "zoned": false, 00:09:21.692 "supported_io_types": { 00:09:21.692 "read": true, 00:09:21.692 "write": true, 00:09:21.692 "unmap": true, 00:09:21.692 "flush": true, 00:09:21.692 "reset": true, 00:09:21.692 "nvme_admin": false, 00:09:21.692 "nvme_io": false, 00:09:21.692 "nvme_io_md": false, 00:09:21.692 "write_zeroes": true, 00:09:21.692 "zcopy": true, 00:09:21.692 "get_zone_info": false, 00:09:21.692 "zone_management": false, 00:09:21.692 "zone_append": false, 00:09:21.692 "compare": false, 00:09:21.692 "compare_and_write": false, 00:09:21.692 "abort": true, 00:09:21.692 "seek_hole": false, 00:09:21.692 "seek_data": false, 00:09:21.692 "copy": true, 00:09:21.692 "nvme_iov_md": false 00:09:21.692 }, 00:09:21.692 
"memory_domains": [ 00:09:21.692 { 00:09:21.692 "dma_device_id": "system", 00:09:21.692 "dma_device_type": 1 00:09:21.692 }, 00:09:21.692 { 00:09:21.692 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.692 "dma_device_type": 2 00:09:21.692 } 00:09:21.692 ], 00:09:21.692 "driver_specific": {} 00:09:21.692 } 00:09:21.692 ] 00:09:21.692 13:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.692 13:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:21.692 13:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:21.692 13:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:21.692 13:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:21.692 13:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:21.692 13:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:21.692 13:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:21.692 13:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:21.692 13:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:21.693 13:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.693 13:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.693 13:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.693 13:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.693 13:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:21.693 13:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.693 13:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.693 13:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.693 13:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.693 13:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.693 "name": "Existed_Raid", 00:09:21.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.693 "strip_size_kb": 0, 00:09:21.693 "state": "configuring", 00:09:21.693 "raid_level": "raid1", 00:09:21.693 "superblock": false, 00:09:21.693 "num_base_bdevs": 3, 00:09:21.693 "num_base_bdevs_discovered": 2, 00:09:21.693 "num_base_bdevs_operational": 3, 00:09:21.693 "base_bdevs_list": [ 00:09:21.693 { 00:09:21.693 "name": "BaseBdev1", 00:09:21.693 "uuid": "c31e1d6b-a95c-4355-a688-9c267a35e5e8", 00:09:21.693 "is_configured": true, 00:09:21.693 "data_offset": 0, 00:09:21.693 "data_size": 65536 00:09:21.693 }, 00:09:21.693 { 00:09:21.693 "name": "BaseBdev2", 00:09:21.693 "uuid": "338c86a2-eede-4c33-b8c7-7b50faaefce7", 00:09:21.693 "is_configured": true, 00:09:21.693 "data_offset": 0, 00:09:21.693 "data_size": 65536 00:09:21.693 }, 00:09:21.693 { 00:09:21.693 "name": "BaseBdev3", 00:09:21.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.693 "is_configured": false, 00:09:21.693 "data_offset": 0, 00:09:21.693 "data_size": 0 00:09:21.693 } 00:09:21.693 ] 00:09:21.693 }' 00:09:21.693 13:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.693 13:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.263 13:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:09:22.263 13:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.263 13:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.263 [2024-11-17 13:19:11.230943] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:22.263 [2024-11-17 13:19:11.230996] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:22.263 [2024-11-17 13:19:11.231008] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:22.263 [2024-11-17 13:19:11.231326] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:22.263 [2024-11-17 13:19:11.231534] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:22.263 [2024-11-17 13:19:11.231550] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:22.263 [2024-11-17 13:19:11.231800] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:22.263 BaseBdev3 00:09:22.263 13:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.263 13:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:22.263 13:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:22.263 13:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:22.263 13:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:22.263 13:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:22.263 13:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:22.263 13:19:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:22.263 13:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.263 13:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.263 13:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.263 13:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:22.263 13:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.263 13:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.263 [ 00:09:22.263 { 00:09:22.263 "name": "BaseBdev3", 00:09:22.263 "aliases": [ 00:09:22.263 "3e553df8-934a-4a3d-baca-fa73fe034f66" 00:09:22.263 ], 00:09:22.263 "product_name": "Malloc disk", 00:09:22.263 "block_size": 512, 00:09:22.263 "num_blocks": 65536, 00:09:22.263 "uuid": "3e553df8-934a-4a3d-baca-fa73fe034f66", 00:09:22.263 "assigned_rate_limits": { 00:09:22.263 "rw_ios_per_sec": 0, 00:09:22.263 "rw_mbytes_per_sec": 0, 00:09:22.263 "r_mbytes_per_sec": 0, 00:09:22.263 "w_mbytes_per_sec": 0 00:09:22.263 }, 00:09:22.263 "claimed": true, 00:09:22.263 "claim_type": "exclusive_write", 00:09:22.263 "zoned": false, 00:09:22.263 "supported_io_types": { 00:09:22.263 "read": true, 00:09:22.263 "write": true, 00:09:22.263 "unmap": true, 00:09:22.263 "flush": true, 00:09:22.263 "reset": true, 00:09:22.263 "nvme_admin": false, 00:09:22.263 "nvme_io": false, 00:09:22.263 "nvme_io_md": false, 00:09:22.263 "write_zeroes": true, 00:09:22.263 "zcopy": true, 00:09:22.263 "get_zone_info": false, 00:09:22.264 "zone_management": false, 00:09:22.264 "zone_append": false, 00:09:22.264 "compare": false, 00:09:22.264 "compare_and_write": false, 00:09:22.264 "abort": true, 00:09:22.264 "seek_hole": false, 00:09:22.264 "seek_data": false, 00:09:22.264 
"copy": true, 00:09:22.264 "nvme_iov_md": false 00:09:22.264 }, 00:09:22.264 "memory_domains": [ 00:09:22.264 { 00:09:22.264 "dma_device_id": "system", 00:09:22.264 "dma_device_type": 1 00:09:22.264 }, 00:09:22.264 { 00:09:22.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.264 "dma_device_type": 2 00:09:22.264 } 00:09:22.264 ], 00:09:22.264 "driver_specific": {} 00:09:22.264 } 00:09:22.264 ] 00:09:22.264 13:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.264 13:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:22.264 13:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:22.264 13:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:22.264 13:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:22.264 13:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:22.264 13:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:22.264 13:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:22.264 13:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:22.264 13:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:22.264 13:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.264 13:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.264 13:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.264 13:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.264 13:19:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.264 13:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.264 13:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.264 13:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.264 13:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.264 13:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.264 "name": "Existed_Raid", 00:09:22.264 "uuid": "4056be61-6c9e-4763-9a09-f0f2ff36059f", 00:09:22.264 "strip_size_kb": 0, 00:09:22.264 "state": "online", 00:09:22.264 "raid_level": "raid1", 00:09:22.264 "superblock": false, 00:09:22.264 "num_base_bdevs": 3, 00:09:22.264 "num_base_bdevs_discovered": 3, 00:09:22.264 "num_base_bdevs_operational": 3, 00:09:22.264 "base_bdevs_list": [ 00:09:22.264 { 00:09:22.264 "name": "BaseBdev1", 00:09:22.264 "uuid": "c31e1d6b-a95c-4355-a688-9c267a35e5e8", 00:09:22.264 "is_configured": true, 00:09:22.264 "data_offset": 0, 00:09:22.264 "data_size": 65536 00:09:22.264 }, 00:09:22.264 { 00:09:22.264 "name": "BaseBdev2", 00:09:22.264 "uuid": "338c86a2-eede-4c33-b8c7-7b50faaefce7", 00:09:22.264 "is_configured": true, 00:09:22.264 "data_offset": 0, 00:09:22.264 "data_size": 65536 00:09:22.264 }, 00:09:22.264 { 00:09:22.264 "name": "BaseBdev3", 00:09:22.264 "uuid": "3e553df8-934a-4a3d-baca-fa73fe034f66", 00:09:22.264 "is_configured": true, 00:09:22.264 "data_offset": 0, 00:09:22.264 "data_size": 65536 00:09:22.264 } 00:09:22.264 ] 00:09:22.264 }' 00:09:22.264 13:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.264 13:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.524 13:19:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:22.524 13:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:22.524 13:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:22.524 13:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:22.524 13:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:22.524 13:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:22.524 13:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:22.524 13:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:22.524 13:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.524 13:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.524 [2024-11-17 13:19:11.726484] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:22.785 13:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.785 13:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:22.785 "name": "Existed_Raid", 00:09:22.785 "aliases": [ 00:09:22.785 "4056be61-6c9e-4763-9a09-f0f2ff36059f" 00:09:22.785 ], 00:09:22.785 "product_name": "Raid Volume", 00:09:22.785 "block_size": 512, 00:09:22.785 "num_blocks": 65536, 00:09:22.785 "uuid": "4056be61-6c9e-4763-9a09-f0f2ff36059f", 00:09:22.785 "assigned_rate_limits": { 00:09:22.785 "rw_ios_per_sec": 0, 00:09:22.785 "rw_mbytes_per_sec": 0, 00:09:22.785 "r_mbytes_per_sec": 0, 00:09:22.785 "w_mbytes_per_sec": 0 00:09:22.785 }, 00:09:22.785 "claimed": false, 00:09:22.785 "zoned": false, 
00:09:22.785 "supported_io_types": { 00:09:22.785 "read": true, 00:09:22.785 "write": true, 00:09:22.785 "unmap": false, 00:09:22.785 "flush": false, 00:09:22.785 "reset": true, 00:09:22.785 "nvme_admin": false, 00:09:22.785 "nvme_io": false, 00:09:22.785 "nvme_io_md": false, 00:09:22.785 "write_zeroes": true, 00:09:22.785 "zcopy": false, 00:09:22.785 "get_zone_info": false, 00:09:22.785 "zone_management": false, 00:09:22.785 "zone_append": false, 00:09:22.785 "compare": false, 00:09:22.785 "compare_and_write": false, 00:09:22.785 "abort": false, 00:09:22.785 "seek_hole": false, 00:09:22.785 "seek_data": false, 00:09:22.785 "copy": false, 00:09:22.785 "nvme_iov_md": false 00:09:22.785 }, 00:09:22.785 "memory_domains": [ 00:09:22.785 { 00:09:22.785 "dma_device_id": "system", 00:09:22.785 "dma_device_type": 1 00:09:22.785 }, 00:09:22.785 { 00:09:22.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.785 "dma_device_type": 2 00:09:22.785 }, 00:09:22.785 { 00:09:22.785 "dma_device_id": "system", 00:09:22.785 "dma_device_type": 1 00:09:22.785 }, 00:09:22.785 { 00:09:22.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.785 "dma_device_type": 2 00:09:22.785 }, 00:09:22.785 { 00:09:22.785 "dma_device_id": "system", 00:09:22.785 "dma_device_type": 1 00:09:22.785 }, 00:09:22.785 { 00:09:22.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.785 "dma_device_type": 2 00:09:22.785 } 00:09:22.785 ], 00:09:22.785 "driver_specific": { 00:09:22.785 "raid": { 00:09:22.785 "uuid": "4056be61-6c9e-4763-9a09-f0f2ff36059f", 00:09:22.785 "strip_size_kb": 0, 00:09:22.785 "state": "online", 00:09:22.785 "raid_level": "raid1", 00:09:22.785 "superblock": false, 00:09:22.785 "num_base_bdevs": 3, 00:09:22.785 "num_base_bdevs_discovered": 3, 00:09:22.785 "num_base_bdevs_operational": 3, 00:09:22.785 "base_bdevs_list": [ 00:09:22.785 { 00:09:22.785 "name": "BaseBdev1", 00:09:22.785 "uuid": "c31e1d6b-a95c-4355-a688-9c267a35e5e8", 00:09:22.785 "is_configured": true, 00:09:22.785 
"data_offset": 0, 00:09:22.785 "data_size": 65536 00:09:22.785 }, 00:09:22.785 { 00:09:22.785 "name": "BaseBdev2", 00:09:22.785 "uuid": "338c86a2-eede-4c33-b8c7-7b50faaefce7", 00:09:22.785 "is_configured": true, 00:09:22.785 "data_offset": 0, 00:09:22.785 "data_size": 65536 00:09:22.785 }, 00:09:22.785 { 00:09:22.785 "name": "BaseBdev3", 00:09:22.785 "uuid": "3e553df8-934a-4a3d-baca-fa73fe034f66", 00:09:22.785 "is_configured": true, 00:09:22.785 "data_offset": 0, 00:09:22.785 "data_size": 65536 00:09:22.785 } 00:09:22.785 ] 00:09:22.785 } 00:09:22.785 } 00:09:22.785 }' 00:09:22.785 13:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:22.785 13:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:22.785 BaseBdev2 00:09:22.785 BaseBdev3' 00:09:22.785 13:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:22.785 13:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:22.785 13:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:22.785 13:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:22.785 13:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:22.785 13:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.785 13:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.785 13:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.785 13:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:09:22.785 13:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:22.785 13:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:22.786 13:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:22.786 13:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:22.786 13:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.786 13:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.786 13:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.786 13:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:22.786 13:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:22.786 13:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:22.786 13:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:22.786 13:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:22.786 13:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.786 13:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.786 13:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.786 13:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:22.786 13:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:09:22.786 13:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:22.786 13:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.786 13:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.786 [2024-11-17 13:19:11.969775] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:23.046 13:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.046 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:23.046 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:23.046 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:23.046 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:23.046 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:23.046 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:23.046 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.046 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:23.046 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:23.046 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:23.046 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:23.046 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.046 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:09:23.046 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.046 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.046 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.046 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.046 13:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.046 13:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.046 13:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.046 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.046 "name": "Existed_Raid", 00:09:23.046 "uuid": "4056be61-6c9e-4763-9a09-f0f2ff36059f", 00:09:23.046 "strip_size_kb": 0, 00:09:23.046 "state": "online", 00:09:23.046 "raid_level": "raid1", 00:09:23.046 "superblock": false, 00:09:23.046 "num_base_bdevs": 3, 00:09:23.046 "num_base_bdevs_discovered": 2, 00:09:23.046 "num_base_bdevs_operational": 2, 00:09:23.046 "base_bdevs_list": [ 00:09:23.046 { 00:09:23.046 "name": null, 00:09:23.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.046 "is_configured": false, 00:09:23.046 "data_offset": 0, 00:09:23.046 "data_size": 65536 00:09:23.046 }, 00:09:23.046 { 00:09:23.046 "name": "BaseBdev2", 00:09:23.046 "uuid": "338c86a2-eede-4c33-b8c7-7b50faaefce7", 00:09:23.046 "is_configured": true, 00:09:23.046 "data_offset": 0, 00:09:23.046 "data_size": 65536 00:09:23.046 }, 00:09:23.046 { 00:09:23.046 "name": "BaseBdev3", 00:09:23.046 "uuid": "3e553df8-934a-4a3d-baca-fa73fe034f66", 00:09:23.046 "is_configured": true, 00:09:23.046 "data_offset": 0, 00:09:23.046 "data_size": 65536 00:09:23.046 } 00:09:23.046 ] 
00:09:23.046 }' 00:09:23.046 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.046 13:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.305 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:23.305 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:23.305 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.305 13:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.305 13:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.305 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:23.306 13:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.566 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:23.566 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:23.566 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:23.566 13:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.566 13:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.566 [2024-11-17 13:19:12.542647] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:23.566 13:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.566 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:23.566 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:23.566 13:19:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.566 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:23.566 13:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.566 13:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.566 13:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.566 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:23.566 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:23.566 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:23.566 13:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.566 13:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.566 [2024-11-17 13:19:12.695392] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:23.566 [2024-11-17 13:19:12.695497] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:23.827 [2024-11-17 13:19:12.789140] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:23.827 [2024-11-17 13:19:12.789194] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:23.827 [2024-11-17 13:19:12.789206] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:23.827 13:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.827 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:23.827 13:19:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:23.827 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.827 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:23.827 13:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.827 13:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.827 13:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.827 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:23.827 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:23.827 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:23.827 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:23.827 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:23.827 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:23.827 13:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.827 13:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.827 BaseBdev2 00:09:23.827 13:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.827 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:23.827 13:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:23.827 13:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:23.827 
13:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:23.827 13:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:23.827 13:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:23.827 13:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:23.827 13:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.827 13:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.827 13:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.827 13:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:23.827 13:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.827 13:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.827 [ 00:09:23.827 { 00:09:23.827 "name": "BaseBdev2", 00:09:23.827 "aliases": [ 00:09:23.827 "0302b4d2-36f9-48d9-b7a5-5e086849478e" 00:09:23.827 ], 00:09:23.827 "product_name": "Malloc disk", 00:09:23.827 "block_size": 512, 00:09:23.827 "num_blocks": 65536, 00:09:23.827 "uuid": "0302b4d2-36f9-48d9-b7a5-5e086849478e", 00:09:23.827 "assigned_rate_limits": { 00:09:23.827 "rw_ios_per_sec": 0, 00:09:23.827 "rw_mbytes_per_sec": 0, 00:09:23.827 "r_mbytes_per_sec": 0, 00:09:23.827 "w_mbytes_per_sec": 0 00:09:23.827 }, 00:09:23.827 "claimed": false, 00:09:23.827 "zoned": false, 00:09:23.827 "supported_io_types": { 00:09:23.827 "read": true, 00:09:23.827 "write": true, 00:09:23.827 "unmap": true, 00:09:23.827 "flush": true, 00:09:23.827 "reset": true, 00:09:23.827 "nvme_admin": false, 00:09:23.827 "nvme_io": false, 00:09:23.827 "nvme_io_md": false, 00:09:23.827 "write_zeroes": true, 
00:09:23.827 "zcopy": true,
00:09:23.827 "get_zone_info": false,
00:09:23.827 "zone_management": false,
00:09:23.827 "zone_append": false,
00:09:23.827 "compare": false,
00:09:23.827 "compare_and_write": false,
00:09:23.827 "abort": true,
00:09:23.827 "seek_hole": false,
00:09:23.827 "seek_data": false,
00:09:23.827 "copy": true,
00:09:23.827 "nvme_iov_md": false
00:09:23.827 },
00:09:23.827 "memory_domains": [
00:09:23.827 {
00:09:23.827 "dma_device_id": "system",
00:09:23.827 "dma_device_type": 1
00:09:23.827 },
00:09:23.827 {
00:09:23.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:23.827 "dma_device_type": 2
00:09:23.827 }
00:09:23.827 ],
00:09:23.827 "driver_specific": {}
00:09:23.827 }
00:09:23.827 ]
00:09:23.827 13:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:23.827 13:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:09:23.827 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:09:23.827 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:09:23.827 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:09:23.827 13:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:23.827 13:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:23.827 BaseBdev3
00:09:23.827 13:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:23.827 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:09:23.827 13:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:09:23.827 13:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:09:23.827 13:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:09:23.827 13:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:09:23.827 13:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:09:23.827 13:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:09:23.827 13:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:23.827 13:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:23.827 13:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:23.827 13:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:09:23.827 13:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:23.827 13:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:23.827 [
00:09:23.827 {
00:09:23.827 "name": "BaseBdev3",
00:09:23.827 "aliases": [
00:09:23.827 "5cba8cae-2bee-421a-a98c-d2ba866ea64c"
00:09:23.827 ],
00:09:23.827 "product_name": "Malloc disk",
00:09:23.827 "block_size": 512,
00:09:23.827 "num_blocks": 65536,
00:09:23.827 "uuid": "5cba8cae-2bee-421a-a98c-d2ba866ea64c",
00:09:23.827 "assigned_rate_limits": {
00:09:23.827 "rw_ios_per_sec": 0,
00:09:23.827 "rw_mbytes_per_sec": 0,
00:09:23.827 "r_mbytes_per_sec": 0,
00:09:23.827 "w_mbytes_per_sec": 0
00:09:23.827 },
00:09:23.827 "claimed": false,
00:09:23.827 "zoned": false,
00:09:23.827 "supported_io_types": {
00:09:23.827 "read": true,
00:09:23.827 "write": true,
00:09:23.827 "unmap": true,
00:09:23.827 "flush": true,
00:09:23.827 "reset": true,
00:09:23.827 "nvme_admin": false,
00:09:23.827 "nvme_io": false,
00:09:23.827 "nvme_io_md": false,
00:09:23.827 "write_zeroes": true,
00:09:23.827 "zcopy": true,
00:09:23.827 "get_zone_info": false,
00:09:23.827 "zone_management": false,
00:09:23.827 "zone_append": false,
00:09:23.827 "compare": false,
00:09:23.827 "compare_and_write": false,
00:09:23.827 "abort": true,
00:09:23.827 "seek_hole": false,
00:09:23.827 "seek_data": false,
00:09:23.827 "copy": true,
00:09:23.827 "nvme_iov_md": false
00:09:23.827 },
00:09:23.827 "memory_domains": [
00:09:23.827 {
00:09:23.827 "dma_device_id": "system",
00:09:23.827 "dma_device_type": 1
00:09:23.827 },
00:09:23.827 {
00:09:23.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:23.828 "dma_device_type": 2
00:09:23.828 }
00:09:23.828 ],
00:09:23.828 "driver_specific": {}
00:09:23.828 }
00:09:23.828 ]
00:09:23.828 13:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:23.828 13:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:09:23.828 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:09:23.828 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:09:23.828 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:09:23.828 13:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:23.828 13:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:23.828 [2024-11-17 13:19:13.007108] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:09:23.828 [2024-11-17 13:19:13.007151] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:09:23.828 [2024-11-17 13:19:13.007170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:23.828 [2024-11-17 13:19:13.009083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:09:23.828 13:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:23.828 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:09:23.828 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:23.828 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:23.828 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:23.828 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:23.828 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:23.828 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:23.828 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:23.828 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:23.828 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:23.828 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:23.828 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:23.828 13:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:23.828 13:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:23.828 13:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:24.088 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:24.088 "name": "Existed_Raid",
00:09:24.088 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:24.088 "strip_size_kb": 0,
00:09:24.088 "state": "configuring",
00:09:24.088 "raid_level": "raid1",
00:09:24.088 "superblock": false,
00:09:24.088 "num_base_bdevs": 3,
00:09:24.088 "num_base_bdevs_discovered": 2,
00:09:24.088 "num_base_bdevs_operational": 3,
00:09:24.088 "base_bdevs_list": [
00:09:24.088 {
00:09:24.088 "name": "BaseBdev1",
00:09:24.088 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:24.088 "is_configured": false,
00:09:24.088 "data_offset": 0,
00:09:24.088 "data_size": 0
00:09:24.088 },
00:09:24.088 {
00:09:24.088 "name": "BaseBdev2",
00:09:24.088 "uuid": "0302b4d2-36f9-48d9-b7a5-5e086849478e",
00:09:24.088 "is_configured": true,
00:09:24.088 "data_offset": 0,
00:09:24.088 "data_size": 65536
00:09:24.088 },
00:09:24.088 {
00:09:24.088 "name": "BaseBdev3",
00:09:24.088 "uuid": "5cba8cae-2bee-421a-a98c-d2ba866ea64c",
00:09:24.088 "is_configured": true,
00:09:24.088 "data_offset": 0,
00:09:24.088 "data_size": 65536
00:09:24.088 }
00:09:24.088 ]
00:09:24.088 }'
00:09:24.088 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:24.088 13:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:24.348 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:09:24.348 13:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:24.348 13:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:24.348 [2024-11-17 13:19:13.458401] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:09:24.348 13:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:24.348 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:09:24.348 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:24.348 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:24.348 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:24.348 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:24.348 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:24.348 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:24.348 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:24.348 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:24.348 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:24.348 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:24.348 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:24.348 13:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:24.348 13:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:24.348 13:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:24.348 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:24.348 "name": "Existed_Raid",
00:09:24.348 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:24.348 "strip_size_kb": 0,
00:09:24.348 "state": "configuring",
00:09:24.348 "raid_level": "raid1",
00:09:24.348 "superblock": false,
00:09:24.348 "num_base_bdevs": 3,
00:09:24.348 "num_base_bdevs_discovered": 1,
00:09:24.348 "num_base_bdevs_operational": 3,
00:09:24.348 "base_bdevs_list": [
00:09:24.348 {
00:09:24.348 "name": "BaseBdev1",
00:09:24.348 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:24.348 "is_configured": false,
00:09:24.348 "data_offset": 0,
00:09:24.348 "data_size": 0
00:09:24.348 },
00:09:24.348 {
00:09:24.348 "name": null,
00:09:24.348 "uuid": "0302b4d2-36f9-48d9-b7a5-5e086849478e",
00:09:24.348 "is_configured": false,
00:09:24.348 "data_offset": 0,
00:09:24.348 "data_size": 65536
00:09:24.348 },
00:09:24.348 {
00:09:24.348 "name": "BaseBdev3",
00:09:24.348 "uuid": "5cba8cae-2bee-421a-a98c-d2ba866ea64c",
00:09:24.348 "is_configured": true,
00:09:24.348 "data_offset": 0,
00:09:24.348 "data_size": 65536
00:09:24.348 }
00:09:24.348 ]
00:09:24.348 }'
00:09:24.348 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:24.348 13:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:24.919 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:24.919 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:09:24.919 13:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:24.919 13:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:24.919 13:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:24.919 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:09:24.919 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:09:24.919 13:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:24.919 13:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:24.919 [2024-11-17 13:19:13.950090] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:24.919 BaseBdev1
00:09:24.919 13:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:24.919 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:09:24.919 13:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:09:24.919 13:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:09:24.919 13:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:09:24.919 13:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:09:24.919 13:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:09:24.919 13:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:09:24.919 13:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:24.919 13:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:24.919 13:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:24.919 13:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:09:24.919 13:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:24.919 13:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:24.919 [
00:09:24.919 {
00:09:24.919 "name": "BaseBdev1",
00:09:24.919 "aliases": [
00:09:24.919 "c2a13eb0-03a9-409b-97f0-b8af9c3a9d0e"
00:09:24.919 ],
00:09:24.919 "product_name": "Malloc disk",
00:09:24.919 "block_size": 512,
00:09:24.919 "num_blocks": 65536,
00:09:24.919 "uuid": "c2a13eb0-03a9-409b-97f0-b8af9c3a9d0e",
00:09:24.919 "assigned_rate_limits": {
00:09:24.919 "rw_ios_per_sec": 0,
00:09:24.919 "rw_mbytes_per_sec": 0,
00:09:24.919 "r_mbytes_per_sec": 0,
00:09:24.919 "w_mbytes_per_sec": 0
00:09:24.919 },
00:09:24.919 "claimed": true,
00:09:24.919 "claim_type": "exclusive_write",
00:09:24.919 "zoned": false,
00:09:24.919 "supported_io_types": {
00:09:24.919 "read": true,
00:09:24.919 "write": true,
00:09:24.919 "unmap": true,
00:09:24.919 "flush": true,
00:09:24.919 "reset": true,
00:09:24.919 "nvme_admin": false,
00:09:24.919 "nvme_io": false,
00:09:24.919 "nvme_io_md": false,
00:09:24.919 "write_zeroes": true,
00:09:24.919 "zcopy": true,
00:09:24.919 "get_zone_info": false,
00:09:24.919 "zone_management": false,
00:09:24.919 "zone_append": false,
00:09:24.919 "compare": false,
00:09:24.919 "compare_and_write": false,
00:09:24.919 "abort": true,
00:09:24.919 "seek_hole": false,
00:09:24.919 "seek_data": false,
00:09:24.919 "copy": true,
00:09:24.919 "nvme_iov_md": false
00:09:24.919 },
00:09:24.919 "memory_domains": [
00:09:24.919 {
00:09:24.919 "dma_device_id": "system",
00:09:24.919 "dma_device_type": 1
00:09:24.919 },
00:09:24.919 {
00:09:24.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:24.919 "dma_device_type": 2
00:09:24.919 }
00:09:24.919 ],
00:09:24.919 "driver_specific": {}
00:09:24.919 }
00:09:24.919 ]
00:09:24.919 13:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:24.919 13:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:09:24.919 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:09:24.919 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:24.919 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:24.919 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:24.919 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:24.919 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:24.919 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:24.919 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:24.919 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:24.919 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:24.919 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:24.919 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:24.919 13:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:24.919 13:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:24.919 13:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:24.919 13:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:24.919 "name": "Existed_Raid",
00:09:24.919 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:24.919 "strip_size_kb": 0,
00:09:24.919 "state": "configuring",
00:09:24.919 "raid_level": "raid1",
00:09:24.919 "superblock": false,
00:09:24.919 "num_base_bdevs": 3,
00:09:24.919 "num_base_bdevs_discovered": 2,
00:09:24.919 "num_base_bdevs_operational": 3,
00:09:24.919 "base_bdevs_list": [
00:09:24.919 {
00:09:24.919 "name": "BaseBdev1",
00:09:24.919 "uuid": "c2a13eb0-03a9-409b-97f0-b8af9c3a9d0e",
00:09:24.919 "is_configured": true,
00:09:24.919 "data_offset": 0,
00:09:24.919 "data_size": 65536
00:09:24.919 },
00:09:24.919 {
00:09:24.919 "name": null,
00:09:24.919 "uuid": "0302b4d2-36f9-48d9-b7a5-5e086849478e",
00:09:24.920 "is_configured": false,
00:09:24.920 "data_offset": 0,
00:09:24.920 "data_size": 65536
00:09:24.920 },
00:09:24.920 {
00:09:24.920 "name": "BaseBdev3",
00:09:24.920 "uuid": "5cba8cae-2bee-421a-a98c-d2ba866ea64c",
00:09:24.920 "is_configured": true,
00:09:24.920 "data_offset": 0,
00:09:24.920 "data_size": 65536
00:09:24.920 }
00:09:24.920 ]
00:09:24.920 }'
00:09:24.920 13:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:24.920 13:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:25.490 13:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:25.490 13:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:25.490 13:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:09:25.490 13:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:25.490 13:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:25.490 13:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:09:25.490 13:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:09:25.490 13:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:25.490 13:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:25.490 [2024-11-17 13:19:14.505223] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:09:25.490 13:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:25.490 13:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:09:25.490 13:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:25.490 13:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:25.490 13:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:25.490 13:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:25.490 13:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:25.490 13:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:25.490 13:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:25.490 13:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:25.490 13:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:25.490 13:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:25.490 13:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:25.490 13:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:25.490 13:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:25.490 13:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:25.490 13:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:25.490 "name": "Existed_Raid",
00:09:25.490 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:25.490 "strip_size_kb": 0,
00:09:25.490 "state": "configuring",
00:09:25.490 "raid_level": "raid1",
00:09:25.490 "superblock": false,
00:09:25.490 "num_base_bdevs": 3,
00:09:25.490 "num_base_bdevs_discovered": 1,
00:09:25.490 "num_base_bdevs_operational": 3,
00:09:25.490 "base_bdevs_list": [
00:09:25.490 {
00:09:25.490 "name": "BaseBdev1",
00:09:25.490 "uuid": "c2a13eb0-03a9-409b-97f0-b8af9c3a9d0e",
00:09:25.490 "is_configured": true,
00:09:25.490 "data_offset": 0,
00:09:25.490 "data_size": 65536
00:09:25.490 },
00:09:25.490 {
00:09:25.490 "name": null,
00:09:25.490 "uuid": "0302b4d2-36f9-48d9-b7a5-5e086849478e",
00:09:25.490 "is_configured": false,
00:09:25.490 "data_offset": 0,
00:09:25.490 "data_size": 65536
00:09:25.490 },
00:09:25.490 {
00:09:25.490 "name": null,
00:09:25.490 "uuid": "5cba8cae-2bee-421a-a98c-d2ba866ea64c",
00:09:25.490 "is_configured": false,
00:09:25.490 "data_offset": 0,
00:09:25.490 "data_size": 65536
00:09:25.490 }
00:09:25.490 ]
00:09:25.490 }'
00:09:25.490 13:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:25.490 13:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:25.783 13:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:25.783 13:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:09:25.783 13:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:25.783 13:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:25.783 13:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:25.783 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:09:25.783 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:09:25.783 13:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:25.783 13:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:26.043 [2024-11-17 13:19:15.008874] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:09:26.043 13:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:26.043 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:09:26.043 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:26.043 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:26.043 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:26.043 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:26.043 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:26.043 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:26.043 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:26.043 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:26.043 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:26.043 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:26.043 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:26.043 13:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:26.043 13:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:26.043 13:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:26.043 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:26.043 "name": "Existed_Raid",
00:09:26.043 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:26.043 "strip_size_kb": 0,
00:09:26.043 "state": "configuring",
00:09:26.043 "raid_level": "raid1",
00:09:26.043 "superblock": false,
00:09:26.043 "num_base_bdevs": 3,
00:09:26.043 "num_base_bdevs_discovered": 2,
00:09:26.043 "num_base_bdevs_operational": 3,
00:09:26.043 "base_bdevs_list": [
00:09:26.043 {
00:09:26.043 "name": "BaseBdev1",
00:09:26.043 "uuid": "c2a13eb0-03a9-409b-97f0-b8af9c3a9d0e",
00:09:26.043 "is_configured": true,
00:09:26.043 "data_offset": 0,
00:09:26.043 "data_size": 65536
00:09:26.043 },
00:09:26.043 {
00:09:26.043 "name": null,
00:09:26.043 "uuid": "0302b4d2-36f9-48d9-b7a5-5e086849478e",
00:09:26.043 "is_configured": false,
00:09:26.043 "data_offset": 0,
00:09:26.043 "data_size": 65536
00:09:26.043 },
00:09:26.043 {
00:09:26.043 "name": "BaseBdev3",
00:09:26.043 "uuid": "5cba8cae-2bee-421a-a98c-d2ba866ea64c",
00:09:26.043 "is_configured": true,
00:09:26.043 "data_offset": 0,
00:09:26.043 "data_size": 65536
00:09:26.043 }
00:09:26.043 ]
00:09:26.043 }'
00:09:26.043 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:26.043 13:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:26.304 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:26.304 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:09:26.304 13:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:26.304 13:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:26.304 13:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:26.304 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:09:26.304 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:09:26.304 13:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:26.304 13:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:26.304 [2024-11-17 13:19:15.520879] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:09:26.564 13:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:26.564 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:09:26.564 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:26.564 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:26.564 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:26.564 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:26.564 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:26.564 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:26.564 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:26.564 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:26.564 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:26.564 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:26.564 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:26.564 13:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:26.564 13:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:26.564 13:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:26.564 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:26.564 "name": "Existed_Raid",
00:09:26.564 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:26.564 "strip_size_kb": 0,
00:09:26.564 "state": "configuring",
00:09:26.564 "raid_level": "raid1",
00:09:26.564 "superblock": false,
00:09:26.564 "num_base_bdevs": 3,
00:09:26.564 "num_base_bdevs_discovered": 1,
00:09:26.564 "num_base_bdevs_operational": 3,
00:09:26.564 "base_bdevs_list": [
00:09:26.564 {
00:09:26.564 "name": null,
00:09:26.564 "uuid": "c2a13eb0-03a9-409b-97f0-b8af9c3a9d0e",
00:09:26.564 "is_configured": false,
00:09:26.564 "data_offset": 0,
00:09:26.564 "data_size": 65536
00:09:26.564 },
00:09:26.564 {
00:09:26.564 "name": null,
00:09:26.564 "uuid": "0302b4d2-36f9-48d9-b7a5-5e086849478e",
00:09:26.564 "is_configured": false,
00:09:26.564 "data_offset": 0,
00:09:26.564 "data_size": 65536
00:09:26.564 },
00:09:26.564 {
00:09:26.564 "name": "BaseBdev3",
00:09:26.564 "uuid": "5cba8cae-2bee-421a-a98c-d2ba866ea64c",
00:09:26.564 "is_configured": true,
00:09:26.564 "data_offset": 0,
00:09:26.564 "data_size": 65536
00:09:26.564 }
00:09:26.564 ]
00:09:26.564 }'
00:09:26.564 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:26.564 13:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:26.824 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:09:27.084 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:27.084 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:27.084 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:27.084 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:27.084 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:09:27.084 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:09:27.084 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:27.084 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:27.084 [2024-11-17 13:19:16.076898] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:27.084 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:27.084 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:09:27.084 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:27.084 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:27.084 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:27.084 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:27.084 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:27.084 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:27.084 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:27.084 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:27.084 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:27.084 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:27.084 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:27.084 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:27.084 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:27.084 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:27.084 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:27.084 "name": "Existed_Raid",
00:09:27.084 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:27.084 "strip_size_kb": 0,
00:09:27.084 "state": "configuring",
00:09:27.084 "raid_level": "raid1",
00:09:27.084 "superblock": false,
00:09:27.084 "num_base_bdevs": 3,
00:09:27.084 "num_base_bdevs_discovered": 2,
00:09:27.084 "num_base_bdevs_operational": 3,
00:09:27.084 "base_bdevs_list": [
00:09:27.084 {
00:09:27.084 "name": null,
00:09:27.084 "uuid": "c2a13eb0-03a9-409b-97f0-b8af9c3a9d0e",
00:09:27.084 "is_configured": false,
00:09:27.084 "data_offset": 0,
00:09:27.084 "data_size": 65536
00:09:27.084 },
00:09:27.084 {
00:09:27.084 "name": "BaseBdev2",
00:09:27.084 "uuid": "0302b4d2-36f9-48d9-b7a5-5e086849478e",
00:09:27.084 "is_configured": true,
00:09:27.084 "data_offset": 0,
00:09:27.084 "data_size": 65536
00:09:27.084 },
00:09:27.084 {
00:09:27.084 "name": "BaseBdev3",
00:09:27.084 "uuid": "5cba8cae-2bee-421a-a98c-d2ba866ea64c",
00:09:27.084 "is_configured": true,
00:09:27.084 "data_offset": 0,
00:09:27.084 "data_size": 65536
00:09:27.084 }
00:09:27.084 ]
00:09:27.084 }'
00:09:27.084 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:27.084 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:27.344 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:09:27.344 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:27.344 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:27.344 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:27.605 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:27.605 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]]
00:09:27.605 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:27.605 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:27.605 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:27.605 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid'
00:09:27.605 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:27.605 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c2a13eb0-03a9-409b-97f0-b8af9c3a9d0e
00:09:27.605 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:27.605 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:27.605 [2024-11-17 13:19:16.686365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed
00:09:27.605 [2024-11-17 13:19:16.686431] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
00:09:27.605 [2024-11-17 13:19:16.686439] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
00:09:27.605 [2024-11-17 13:19:16.686687] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:09:27.605 [2024-11-17 13:19:16.686878] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200
00:09:27.605 [2024-11-17 13:19:16.686901] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200
00:09:27.605 [2024-11-17 13:19:16.687158] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb NewBaseBdev
00:09:27.605 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:27.605 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev
00:09:27.605 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev
00:09:27.605 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:09:27.605 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:09:27.605 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:09:27.605 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:09:27.605 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:09:27.605 13:19:16 bdev_raid.raid_state_function_test --
common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.605 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.605 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.605 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:27.605 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.605 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.605 [ 00:09:27.605 { 00:09:27.605 "name": "NewBaseBdev", 00:09:27.605 "aliases": [ 00:09:27.605 "c2a13eb0-03a9-409b-97f0-b8af9c3a9d0e" 00:09:27.605 ], 00:09:27.605 "product_name": "Malloc disk", 00:09:27.605 "block_size": 512, 00:09:27.605 "num_blocks": 65536, 00:09:27.605 "uuid": "c2a13eb0-03a9-409b-97f0-b8af9c3a9d0e", 00:09:27.605 "assigned_rate_limits": { 00:09:27.605 "rw_ios_per_sec": 0, 00:09:27.605 "rw_mbytes_per_sec": 0, 00:09:27.605 "r_mbytes_per_sec": 0, 00:09:27.605 "w_mbytes_per_sec": 0 00:09:27.605 }, 00:09:27.605 "claimed": true, 00:09:27.605 "claim_type": "exclusive_write", 00:09:27.605 "zoned": false, 00:09:27.605 "supported_io_types": { 00:09:27.605 "read": true, 00:09:27.605 "write": true, 00:09:27.605 "unmap": true, 00:09:27.605 "flush": true, 00:09:27.605 "reset": true, 00:09:27.605 "nvme_admin": false, 00:09:27.605 "nvme_io": false, 00:09:27.605 "nvme_io_md": false, 00:09:27.605 "write_zeroes": true, 00:09:27.605 "zcopy": true, 00:09:27.605 "get_zone_info": false, 00:09:27.605 "zone_management": false, 00:09:27.605 "zone_append": false, 00:09:27.605 "compare": false, 00:09:27.605 "compare_and_write": false, 00:09:27.605 "abort": true, 00:09:27.605 "seek_hole": false, 00:09:27.605 "seek_data": false, 00:09:27.605 "copy": true, 00:09:27.605 "nvme_iov_md": false 00:09:27.605 }, 00:09:27.605 "memory_domains": [ 00:09:27.605 { 00:09:27.605 
"dma_device_id": "system", 00:09:27.605 "dma_device_type": 1 00:09:27.605 }, 00:09:27.605 { 00:09:27.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.605 "dma_device_type": 2 00:09:27.605 } 00:09:27.605 ], 00:09:27.605 "driver_specific": {} 00:09:27.605 } 00:09:27.605 ] 00:09:27.605 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.605 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:27.605 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:27.605 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:27.605 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:27.605 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:27.605 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:27.605 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:27.605 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.605 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.605 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.605 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.605 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.605 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.605 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:09:27.605 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.605 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.605 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.605 "name": "Existed_Raid", 00:09:27.605 "uuid": "1d29c2ab-3358-42b0-b68b-a253591eaead", 00:09:27.605 "strip_size_kb": 0, 00:09:27.605 "state": "online", 00:09:27.605 "raid_level": "raid1", 00:09:27.605 "superblock": false, 00:09:27.605 "num_base_bdevs": 3, 00:09:27.605 "num_base_bdevs_discovered": 3, 00:09:27.605 "num_base_bdevs_operational": 3, 00:09:27.605 "base_bdevs_list": [ 00:09:27.605 { 00:09:27.605 "name": "NewBaseBdev", 00:09:27.605 "uuid": "c2a13eb0-03a9-409b-97f0-b8af9c3a9d0e", 00:09:27.605 "is_configured": true, 00:09:27.605 "data_offset": 0, 00:09:27.605 "data_size": 65536 00:09:27.605 }, 00:09:27.605 { 00:09:27.605 "name": "BaseBdev2", 00:09:27.605 "uuid": "0302b4d2-36f9-48d9-b7a5-5e086849478e", 00:09:27.605 "is_configured": true, 00:09:27.605 "data_offset": 0, 00:09:27.605 "data_size": 65536 00:09:27.605 }, 00:09:27.605 { 00:09:27.605 "name": "BaseBdev3", 00:09:27.605 "uuid": "5cba8cae-2bee-421a-a98c-d2ba866ea64c", 00:09:27.605 "is_configured": true, 00:09:27.605 "data_offset": 0, 00:09:27.605 "data_size": 65536 00:09:27.605 } 00:09:27.605 ] 00:09:27.605 }' 00:09:27.605 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.605 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.175 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:28.175 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:28.175 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:28.175 13:19:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:28.175 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:28.175 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:28.175 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:28.175 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:28.175 13:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.175 13:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.175 [2024-11-17 13:19:17.161899] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:28.175 13:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.175 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:28.175 "name": "Existed_Raid", 00:09:28.175 "aliases": [ 00:09:28.175 "1d29c2ab-3358-42b0-b68b-a253591eaead" 00:09:28.175 ], 00:09:28.175 "product_name": "Raid Volume", 00:09:28.175 "block_size": 512, 00:09:28.175 "num_blocks": 65536, 00:09:28.175 "uuid": "1d29c2ab-3358-42b0-b68b-a253591eaead", 00:09:28.175 "assigned_rate_limits": { 00:09:28.175 "rw_ios_per_sec": 0, 00:09:28.175 "rw_mbytes_per_sec": 0, 00:09:28.175 "r_mbytes_per_sec": 0, 00:09:28.175 "w_mbytes_per_sec": 0 00:09:28.175 }, 00:09:28.175 "claimed": false, 00:09:28.175 "zoned": false, 00:09:28.175 "supported_io_types": { 00:09:28.175 "read": true, 00:09:28.175 "write": true, 00:09:28.175 "unmap": false, 00:09:28.175 "flush": false, 00:09:28.175 "reset": true, 00:09:28.175 "nvme_admin": false, 00:09:28.175 "nvme_io": false, 00:09:28.175 "nvme_io_md": false, 00:09:28.175 "write_zeroes": true, 00:09:28.175 "zcopy": false, 00:09:28.175 
"get_zone_info": false, 00:09:28.175 "zone_management": false, 00:09:28.175 "zone_append": false, 00:09:28.175 "compare": false, 00:09:28.175 "compare_and_write": false, 00:09:28.175 "abort": false, 00:09:28.175 "seek_hole": false, 00:09:28.175 "seek_data": false, 00:09:28.175 "copy": false, 00:09:28.175 "nvme_iov_md": false 00:09:28.175 }, 00:09:28.175 "memory_domains": [ 00:09:28.175 { 00:09:28.175 "dma_device_id": "system", 00:09:28.175 "dma_device_type": 1 00:09:28.176 }, 00:09:28.176 { 00:09:28.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.176 "dma_device_type": 2 00:09:28.176 }, 00:09:28.176 { 00:09:28.176 "dma_device_id": "system", 00:09:28.176 "dma_device_type": 1 00:09:28.176 }, 00:09:28.176 { 00:09:28.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.176 "dma_device_type": 2 00:09:28.176 }, 00:09:28.176 { 00:09:28.176 "dma_device_id": "system", 00:09:28.176 "dma_device_type": 1 00:09:28.176 }, 00:09:28.176 { 00:09:28.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.176 "dma_device_type": 2 00:09:28.176 } 00:09:28.176 ], 00:09:28.176 "driver_specific": { 00:09:28.176 "raid": { 00:09:28.176 "uuid": "1d29c2ab-3358-42b0-b68b-a253591eaead", 00:09:28.176 "strip_size_kb": 0, 00:09:28.176 "state": "online", 00:09:28.176 "raid_level": "raid1", 00:09:28.176 "superblock": false, 00:09:28.176 "num_base_bdevs": 3, 00:09:28.176 "num_base_bdevs_discovered": 3, 00:09:28.176 "num_base_bdevs_operational": 3, 00:09:28.176 "base_bdevs_list": [ 00:09:28.176 { 00:09:28.176 "name": "NewBaseBdev", 00:09:28.176 "uuid": "c2a13eb0-03a9-409b-97f0-b8af9c3a9d0e", 00:09:28.176 "is_configured": true, 00:09:28.176 "data_offset": 0, 00:09:28.176 "data_size": 65536 00:09:28.176 }, 00:09:28.176 { 00:09:28.176 "name": "BaseBdev2", 00:09:28.176 "uuid": "0302b4d2-36f9-48d9-b7a5-5e086849478e", 00:09:28.176 "is_configured": true, 00:09:28.176 "data_offset": 0, 00:09:28.176 "data_size": 65536 00:09:28.176 }, 00:09:28.176 { 00:09:28.176 "name": "BaseBdev3", 00:09:28.176 "uuid": 
"5cba8cae-2bee-421a-a98c-d2ba866ea64c", 00:09:28.176 "is_configured": true, 00:09:28.176 "data_offset": 0, 00:09:28.176 "data_size": 65536 00:09:28.176 } 00:09:28.176 ] 00:09:28.176 } 00:09:28.176 } 00:09:28.176 }' 00:09:28.176 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:28.176 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:28.176 BaseBdev2 00:09:28.176 BaseBdev3' 00:09:28.176 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:28.176 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:28.176 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:28.176 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:28.176 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:28.176 13:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.176 13:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.176 13:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.176 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:28.176 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:28.176 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:28.176 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:09:28.176 13:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.176 13:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.176 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:28.176 13:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.176 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:28.176 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:28.176 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:28.176 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:28.176 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:28.176 13:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.176 13:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.436 13:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.436 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:28.436 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:28.436 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:28.436 13:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.436 13:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.436 
[2024-11-17 13:19:17.421182] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:28.436 [2024-11-17 13:19:17.421230] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:28.436 [2024-11-17 13:19:17.421301] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:28.436 [2024-11-17 13:19:17.421638] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:28.436 [2024-11-17 13:19:17.421660] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:28.436 13:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.436 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67321 00:09:28.436 13:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 67321 ']' 00:09:28.436 13:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67321 00:09:28.436 13:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:28.436 13:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:28.436 13:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67321 00:09:28.436 killing process with pid 67321 00:09:28.436 13:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:28.436 13:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:28.436 13:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67321' 00:09:28.436 13:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 67321 00:09:28.436 [2024-11-17 
13:19:17.465551] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:28.436 13:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67321 00:09:28.696 [2024-11-17 13:19:17.766033] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:30.077 ************************************ 00:09:30.077 END TEST raid_state_function_test 00:09:30.077 ************************************ 00:09:30.077 13:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:30.077 00:09:30.077 real 0m10.519s 00:09:30.077 user 0m16.740s 00:09:30.077 sys 0m1.891s 00:09:30.077 13:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:30.077 13:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.077 13:19:18 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:09:30.077 13:19:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:30.077 13:19:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:30.077 13:19:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:30.077 ************************************ 00:09:30.077 START TEST raid_state_function_test_sb 00:09:30.077 ************************************ 00:09:30.077 13:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:09:30.077 13:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:30.077 13:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:30.077 13:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:30.077 13:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:30.077 13:19:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:30.077 13:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:30.077 13:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:30.077 13:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:30.077 13:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:30.077 13:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:30.077 13:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:30.077 13:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:30.077 13:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:30.077 13:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:30.077 13:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:30.077 13:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:30.077 13:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:30.077 13:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:30.077 13:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:30.077 13:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:30.077 13:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:30.077 13:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:30.077 
13:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:30.077 13:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:30.077 13:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:30.077 13:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=67942 00:09:30.077 13:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:30.077 13:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67942' 00:09:30.077 Process raid pid: 67942 00:09:30.077 13:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 67942 00:09:30.077 13:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 67942 ']' 00:09:30.077 13:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.077 13:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:30.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:30.077 13:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:30.077 13:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:30.077 13:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.077 [2024-11-17 13:19:19.031281] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:09:30.077 [2024-11-17 13:19:19.031389] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:30.077 [2024-11-17 13:19:19.203665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.337 [2024-11-17 13:19:19.317121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.337 [2024-11-17 13:19:19.521381] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:30.337 [2024-11-17 13:19:19.521423] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:30.906 13:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:30.906 13:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:30.906 13:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:30.906 13:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.906 13:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.906 [2024-11-17 13:19:19.866124] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:30.906 [2024-11-17 13:19:19.866176] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:30.906 [2024-11-17 13:19:19.866187] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:30.906 [2024-11-17 13:19:19.866196] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:30.906 [2024-11-17 13:19:19.866203] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:30.906 [2024-11-17 13:19:19.866222] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:30.906 13:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.906 13:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:30.906 13:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.906 13:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:30.906 13:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:30.906 13:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:30.906 13:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:30.906 13:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.906 13:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.906 13:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.906 13:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.906 13:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.906 13:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.906 13:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.906 13:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.906 13:19:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.906 13:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.906 "name": "Existed_Raid", 00:09:30.906 "uuid": "2f4b973d-2b1e-470b-93cb-5cf480a2d0d6", 00:09:30.906 "strip_size_kb": 0, 00:09:30.906 "state": "configuring", 00:09:30.906 "raid_level": "raid1", 00:09:30.906 "superblock": true, 00:09:30.906 "num_base_bdevs": 3, 00:09:30.906 "num_base_bdevs_discovered": 0, 00:09:30.906 "num_base_bdevs_operational": 3, 00:09:30.906 "base_bdevs_list": [ 00:09:30.906 { 00:09:30.906 "name": "BaseBdev1", 00:09:30.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.906 "is_configured": false, 00:09:30.906 "data_offset": 0, 00:09:30.906 "data_size": 0 00:09:30.906 }, 00:09:30.906 { 00:09:30.906 "name": "BaseBdev2", 00:09:30.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.906 "is_configured": false, 00:09:30.906 "data_offset": 0, 00:09:30.906 "data_size": 0 00:09:30.906 }, 00:09:30.906 { 00:09:30.906 "name": "BaseBdev3", 00:09:30.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.906 "is_configured": false, 00:09:30.906 "data_offset": 0, 00:09:30.906 "data_size": 0 00:09:30.906 } 00:09:30.906 ] 00:09:30.906 }' 00:09:30.906 13:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.906 13:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.166 13:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:31.166 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.166 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.166 [2024-11-17 13:19:20.309411] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:31.166 [2024-11-17 13:19:20.309480] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:31.166 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.166 13:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:31.166 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.166 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.166 [2024-11-17 13:19:20.321358] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:31.166 [2024-11-17 13:19:20.321415] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:31.166 [2024-11-17 13:19:20.321427] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:31.166 [2024-11-17 13:19:20.321440] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:31.166 [2024-11-17 13:19:20.321448] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:31.166 [2024-11-17 13:19:20.321461] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:31.166 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.166 13:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:31.166 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.166 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.166 [2024-11-17 13:19:20.378377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:31.167 BaseBdev1 
00:09:31.167 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.167 13:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:31.167 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:31.167 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:31.167 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:31.167 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:31.167 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:31.167 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:31.167 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.167 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.427 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.427 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:31.427 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.427 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.427 [ 00:09:31.427 { 00:09:31.427 "name": "BaseBdev1", 00:09:31.427 "aliases": [ 00:09:31.427 "c60e4617-2908-4e8e-a503-7f1c038bb6e2" 00:09:31.427 ], 00:09:31.427 "product_name": "Malloc disk", 00:09:31.427 "block_size": 512, 00:09:31.427 "num_blocks": 65536, 00:09:31.427 "uuid": "c60e4617-2908-4e8e-a503-7f1c038bb6e2", 00:09:31.427 "assigned_rate_limits": { 00:09:31.427 
"rw_ios_per_sec": 0, 00:09:31.427 "rw_mbytes_per_sec": 0, 00:09:31.427 "r_mbytes_per_sec": 0, 00:09:31.427 "w_mbytes_per_sec": 0 00:09:31.427 }, 00:09:31.427 "claimed": true, 00:09:31.427 "claim_type": "exclusive_write", 00:09:31.427 "zoned": false, 00:09:31.427 "supported_io_types": { 00:09:31.427 "read": true, 00:09:31.427 "write": true, 00:09:31.427 "unmap": true, 00:09:31.427 "flush": true, 00:09:31.427 "reset": true, 00:09:31.427 "nvme_admin": false, 00:09:31.427 "nvme_io": false, 00:09:31.427 "nvme_io_md": false, 00:09:31.427 "write_zeroes": true, 00:09:31.427 "zcopy": true, 00:09:31.427 "get_zone_info": false, 00:09:31.427 "zone_management": false, 00:09:31.427 "zone_append": false, 00:09:31.427 "compare": false, 00:09:31.427 "compare_and_write": false, 00:09:31.427 "abort": true, 00:09:31.427 "seek_hole": false, 00:09:31.427 "seek_data": false, 00:09:31.427 "copy": true, 00:09:31.427 "nvme_iov_md": false 00:09:31.427 }, 00:09:31.427 "memory_domains": [ 00:09:31.427 { 00:09:31.427 "dma_device_id": "system", 00:09:31.427 "dma_device_type": 1 00:09:31.427 }, 00:09:31.427 { 00:09:31.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.427 "dma_device_type": 2 00:09:31.427 } 00:09:31.427 ], 00:09:31.427 "driver_specific": {} 00:09:31.427 } 00:09:31.427 ] 00:09:31.427 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.428 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:31.428 13:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:31.428 13:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:31.428 13:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:31.428 13:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:09:31.428 13:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:31.428 13:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:31.428 13:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.428 13:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.428 13:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.428 13:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.428 13:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.428 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.428 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.428 13:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.428 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.428 13:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.428 "name": "Existed_Raid", 00:09:31.428 "uuid": "f68a700d-c944-4807-958f-4b13c878d509", 00:09:31.428 "strip_size_kb": 0, 00:09:31.428 "state": "configuring", 00:09:31.428 "raid_level": "raid1", 00:09:31.428 "superblock": true, 00:09:31.428 "num_base_bdevs": 3, 00:09:31.428 "num_base_bdevs_discovered": 1, 00:09:31.428 "num_base_bdevs_operational": 3, 00:09:31.428 "base_bdevs_list": [ 00:09:31.428 { 00:09:31.428 "name": "BaseBdev1", 00:09:31.428 "uuid": "c60e4617-2908-4e8e-a503-7f1c038bb6e2", 00:09:31.428 "is_configured": true, 00:09:31.428 "data_offset": 2048, 00:09:31.428 "data_size": 63488 
00:09:31.428 }, 00:09:31.428 { 00:09:31.428 "name": "BaseBdev2", 00:09:31.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.428 "is_configured": false, 00:09:31.428 "data_offset": 0, 00:09:31.428 "data_size": 0 00:09:31.428 }, 00:09:31.428 { 00:09:31.428 "name": "BaseBdev3", 00:09:31.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.428 "is_configured": false, 00:09:31.428 "data_offset": 0, 00:09:31.428 "data_size": 0 00:09:31.428 } 00:09:31.428 ] 00:09:31.428 }' 00:09:31.428 13:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.428 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.687 13:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:31.687 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.687 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.687 [2024-11-17 13:19:20.845657] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:31.687 [2024-11-17 13:19:20.845738] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:31.687 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.687 13:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:31.687 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.687 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.687 [2024-11-17 13:19:20.857695] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:31.687 [2024-11-17 13:19:20.859902] 
bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:31.687 [2024-11-17 13:19:20.859970] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:31.687 [2024-11-17 13:19:20.859982] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:31.687 [2024-11-17 13:19:20.859994] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:31.687 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.687 13:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:31.687 13:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:31.688 13:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:31.688 13:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:31.688 13:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:31.688 13:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:31.688 13:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:31.688 13:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:31.688 13:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.688 13:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.688 13:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.688 13:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:09:31.688 13:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.688 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.688 13:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.688 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.688 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.947 13:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.947 "name": "Existed_Raid", 00:09:31.947 "uuid": "eb7f07e9-adc6-489a-a877-8c17bdf4c8db", 00:09:31.947 "strip_size_kb": 0, 00:09:31.947 "state": "configuring", 00:09:31.947 "raid_level": "raid1", 00:09:31.947 "superblock": true, 00:09:31.947 "num_base_bdevs": 3, 00:09:31.947 "num_base_bdevs_discovered": 1, 00:09:31.947 "num_base_bdevs_operational": 3, 00:09:31.947 "base_bdevs_list": [ 00:09:31.947 { 00:09:31.947 "name": "BaseBdev1", 00:09:31.947 "uuid": "c60e4617-2908-4e8e-a503-7f1c038bb6e2", 00:09:31.947 "is_configured": true, 00:09:31.947 "data_offset": 2048, 00:09:31.947 "data_size": 63488 00:09:31.947 }, 00:09:31.947 { 00:09:31.947 "name": "BaseBdev2", 00:09:31.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.947 "is_configured": false, 00:09:31.947 "data_offset": 0, 00:09:31.947 "data_size": 0 00:09:31.947 }, 00:09:31.947 { 00:09:31.947 "name": "BaseBdev3", 00:09:31.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.947 "is_configured": false, 00:09:31.947 "data_offset": 0, 00:09:31.947 "data_size": 0 00:09:31.947 } 00:09:31.947 ] 00:09:31.947 }' 00:09:31.948 13:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.948 13:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:32.208 13:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:32.208 13:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.208 13:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.208 [2024-11-17 13:19:21.354158] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:32.208 BaseBdev2 00:09:32.208 13:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.208 13:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:32.208 13:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:32.208 13:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:32.208 13:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:32.208 13:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:32.208 13:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:32.208 13:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:32.208 13:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.208 13:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.208 13:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.208 13:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:32.208 13:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:32.208 13:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.208 [ 00:09:32.208 { 00:09:32.208 "name": "BaseBdev2", 00:09:32.208 "aliases": [ 00:09:32.208 "3aa13b84-a251-4cf4-9642-a268e4ff7962" 00:09:32.208 ], 00:09:32.208 "product_name": "Malloc disk", 00:09:32.208 "block_size": 512, 00:09:32.208 "num_blocks": 65536, 00:09:32.208 "uuid": "3aa13b84-a251-4cf4-9642-a268e4ff7962", 00:09:32.208 "assigned_rate_limits": { 00:09:32.208 "rw_ios_per_sec": 0, 00:09:32.208 "rw_mbytes_per_sec": 0, 00:09:32.208 "r_mbytes_per_sec": 0, 00:09:32.208 "w_mbytes_per_sec": 0 00:09:32.208 }, 00:09:32.208 "claimed": true, 00:09:32.208 "claim_type": "exclusive_write", 00:09:32.208 "zoned": false, 00:09:32.208 "supported_io_types": { 00:09:32.208 "read": true, 00:09:32.208 "write": true, 00:09:32.208 "unmap": true, 00:09:32.208 "flush": true, 00:09:32.208 "reset": true, 00:09:32.208 "nvme_admin": false, 00:09:32.208 "nvme_io": false, 00:09:32.208 "nvme_io_md": false, 00:09:32.208 "write_zeroes": true, 00:09:32.208 "zcopy": true, 00:09:32.208 "get_zone_info": false, 00:09:32.208 "zone_management": false, 00:09:32.208 "zone_append": false, 00:09:32.208 "compare": false, 00:09:32.208 "compare_and_write": false, 00:09:32.208 "abort": true, 00:09:32.208 "seek_hole": false, 00:09:32.208 "seek_data": false, 00:09:32.208 "copy": true, 00:09:32.208 "nvme_iov_md": false 00:09:32.208 }, 00:09:32.208 "memory_domains": [ 00:09:32.208 { 00:09:32.208 "dma_device_id": "system", 00:09:32.208 "dma_device_type": 1 00:09:32.208 }, 00:09:32.208 { 00:09:32.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.208 "dma_device_type": 2 00:09:32.208 } 00:09:32.208 ], 00:09:32.208 "driver_specific": {} 00:09:32.208 } 00:09:32.208 ] 00:09:32.208 13:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.208 13:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:09:32.208 13:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:32.208 13:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:32.208 13:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:32.208 13:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.208 13:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:32.208 13:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:32.208 13:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:32.208 13:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:32.208 13:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.208 13:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.208 13:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.208 13:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.208 13:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.208 13:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.208 13:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.208 13:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.208 13:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.468 
13:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.468 "name": "Existed_Raid", 00:09:32.468 "uuid": "eb7f07e9-adc6-489a-a877-8c17bdf4c8db", 00:09:32.468 "strip_size_kb": 0, 00:09:32.468 "state": "configuring", 00:09:32.468 "raid_level": "raid1", 00:09:32.468 "superblock": true, 00:09:32.468 "num_base_bdevs": 3, 00:09:32.468 "num_base_bdevs_discovered": 2, 00:09:32.468 "num_base_bdevs_operational": 3, 00:09:32.468 "base_bdevs_list": [ 00:09:32.468 { 00:09:32.468 "name": "BaseBdev1", 00:09:32.468 "uuid": "c60e4617-2908-4e8e-a503-7f1c038bb6e2", 00:09:32.468 "is_configured": true, 00:09:32.468 "data_offset": 2048, 00:09:32.468 "data_size": 63488 00:09:32.468 }, 00:09:32.468 { 00:09:32.468 "name": "BaseBdev2", 00:09:32.468 "uuid": "3aa13b84-a251-4cf4-9642-a268e4ff7962", 00:09:32.468 "is_configured": true, 00:09:32.469 "data_offset": 2048, 00:09:32.469 "data_size": 63488 00:09:32.469 }, 00:09:32.469 { 00:09:32.469 "name": "BaseBdev3", 00:09:32.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.469 "is_configured": false, 00:09:32.469 "data_offset": 0, 00:09:32.469 "data_size": 0 00:09:32.469 } 00:09:32.469 ] 00:09:32.469 }' 00:09:32.469 13:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.469 13:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.729 13:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:32.729 13:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.729 13:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.729 [2024-11-17 13:19:21.850853] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:32.729 [2024-11-17 13:19:21.851173] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:09:32.729 [2024-11-17 13:19:21.851240] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:32.729 [2024-11-17 13:19:21.851636] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:32.729 BaseBdev3 00:09:32.729 [2024-11-17 13:19:21.851850] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:32.729 [2024-11-17 13:19:21.851862] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:32.729 [2024-11-17 13:19:21.852054] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:32.729 13:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.729 13:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:32.729 13:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:32.729 13:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:32.729 13:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:32.729 13:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:32.729 13:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:32.729 13:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:32.729 13:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.729 13:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.729 13:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.729 13:19:21 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:32.729 13:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.729 13:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.729 [ 00:09:32.729 { 00:09:32.729 "name": "BaseBdev3", 00:09:32.729 "aliases": [ 00:09:32.729 "05e50818-cdd8-4b13-ad35-2229de5bcff8" 00:09:32.729 ], 00:09:32.729 "product_name": "Malloc disk", 00:09:32.729 "block_size": 512, 00:09:32.729 "num_blocks": 65536, 00:09:32.729 "uuid": "05e50818-cdd8-4b13-ad35-2229de5bcff8", 00:09:32.729 "assigned_rate_limits": { 00:09:32.729 "rw_ios_per_sec": 0, 00:09:32.729 "rw_mbytes_per_sec": 0, 00:09:32.729 "r_mbytes_per_sec": 0, 00:09:32.729 "w_mbytes_per_sec": 0 00:09:32.729 }, 00:09:32.729 "claimed": true, 00:09:32.729 "claim_type": "exclusive_write", 00:09:32.729 "zoned": false, 00:09:32.729 "supported_io_types": { 00:09:32.729 "read": true, 00:09:32.729 "write": true, 00:09:32.729 "unmap": true, 00:09:32.729 "flush": true, 00:09:32.729 "reset": true, 00:09:32.729 "nvme_admin": false, 00:09:32.729 "nvme_io": false, 00:09:32.729 "nvme_io_md": false, 00:09:32.729 "write_zeroes": true, 00:09:32.729 "zcopy": true, 00:09:32.729 "get_zone_info": false, 00:09:32.729 "zone_management": false, 00:09:32.729 "zone_append": false, 00:09:32.729 "compare": false, 00:09:32.729 "compare_and_write": false, 00:09:32.729 "abort": true, 00:09:32.729 "seek_hole": false, 00:09:32.729 "seek_data": false, 00:09:32.729 "copy": true, 00:09:32.729 "nvme_iov_md": false 00:09:32.729 }, 00:09:32.729 "memory_domains": [ 00:09:32.729 { 00:09:32.729 "dma_device_id": "system", 00:09:32.729 "dma_device_type": 1 00:09:32.729 }, 00:09:32.729 { 00:09:32.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.729 "dma_device_type": 2 00:09:32.729 } 00:09:32.729 ], 00:09:32.729 "driver_specific": {} 00:09:32.729 } 00:09:32.729 ] 
00:09:32.729 13:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.729 13:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:32.729 13:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:32.729 13:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:32.729 13:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:32.729 13:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.729 13:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:32.729 13:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:32.729 13:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:32.729 13:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:32.729 13:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.729 13:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.729 13:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.729 13:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.729 13:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.729 13:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.729 13:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.729 
13:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.729 13:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.729 13:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.729 "name": "Existed_Raid", 00:09:32.729 "uuid": "eb7f07e9-adc6-489a-a877-8c17bdf4c8db", 00:09:32.729 "strip_size_kb": 0, 00:09:32.729 "state": "online", 00:09:32.729 "raid_level": "raid1", 00:09:32.729 "superblock": true, 00:09:32.729 "num_base_bdevs": 3, 00:09:32.729 "num_base_bdevs_discovered": 3, 00:09:32.729 "num_base_bdevs_operational": 3, 00:09:32.729 "base_bdevs_list": [ 00:09:32.729 { 00:09:32.729 "name": "BaseBdev1", 00:09:32.729 "uuid": "c60e4617-2908-4e8e-a503-7f1c038bb6e2", 00:09:32.729 "is_configured": true, 00:09:32.729 "data_offset": 2048, 00:09:32.729 "data_size": 63488 00:09:32.729 }, 00:09:32.729 { 00:09:32.729 "name": "BaseBdev2", 00:09:32.729 "uuid": "3aa13b84-a251-4cf4-9642-a268e4ff7962", 00:09:32.729 "is_configured": true, 00:09:32.729 "data_offset": 2048, 00:09:32.729 "data_size": 63488 00:09:32.729 }, 00:09:32.729 { 00:09:32.729 "name": "BaseBdev3", 00:09:32.729 "uuid": "05e50818-cdd8-4b13-ad35-2229de5bcff8", 00:09:32.729 "is_configured": true, 00:09:32.729 "data_offset": 2048, 00:09:32.729 "data_size": 63488 00:09:32.729 } 00:09:32.729 ] 00:09:32.729 }' 00:09:32.729 13:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.729 13:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.299 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:33.299 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:33.300 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:09:33.300 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:33.300 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:33.300 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:33.300 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:33.300 13:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.300 13:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.300 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:33.300 [2024-11-17 13:19:22.326555] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:33.300 13:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.300 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:33.300 "name": "Existed_Raid", 00:09:33.300 "aliases": [ 00:09:33.300 "eb7f07e9-adc6-489a-a877-8c17bdf4c8db" 00:09:33.300 ], 00:09:33.300 "product_name": "Raid Volume", 00:09:33.300 "block_size": 512, 00:09:33.300 "num_blocks": 63488, 00:09:33.300 "uuid": "eb7f07e9-adc6-489a-a877-8c17bdf4c8db", 00:09:33.300 "assigned_rate_limits": { 00:09:33.300 "rw_ios_per_sec": 0, 00:09:33.300 "rw_mbytes_per_sec": 0, 00:09:33.300 "r_mbytes_per_sec": 0, 00:09:33.300 "w_mbytes_per_sec": 0 00:09:33.300 }, 00:09:33.300 "claimed": false, 00:09:33.300 "zoned": false, 00:09:33.300 "supported_io_types": { 00:09:33.300 "read": true, 00:09:33.300 "write": true, 00:09:33.300 "unmap": false, 00:09:33.300 "flush": false, 00:09:33.300 "reset": true, 00:09:33.300 "nvme_admin": false, 00:09:33.300 "nvme_io": false, 00:09:33.300 "nvme_io_md": false, 00:09:33.300 "write_zeroes": true, 
00:09:33.300 "zcopy": false, 00:09:33.300 "get_zone_info": false, 00:09:33.300 "zone_management": false, 00:09:33.300 "zone_append": false, 00:09:33.300 "compare": false, 00:09:33.300 "compare_and_write": false, 00:09:33.300 "abort": false, 00:09:33.300 "seek_hole": false, 00:09:33.300 "seek_data": false, 00:09:33.300 "copy": false, 00:09:33.300 "nvme_iov_md": false 00:09:33.300 }, 00:09:33.300 "memory_domains": [ 00:09:33.300 { 00:09:33.300 "dma_device_id": "system", 00:09:33.300 "dma_device_type": 1 00:09:33.300 }, 00:09:33.300 { 00:09:33.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.300 "dma_device_type": 2 00:09:33.300 }, 00:09:33.300 { 00:09:33.300 "dma_device_id": "system", 00:09:33.300 "dma_device_type": 1 00:09:33.300 }, 00:09:33.300 { 00:09:33.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.300 "dma_device_type": 2 00:09:33.300 }, 00:09:33.300 { 00:09:33.300 "dma_device_id": "system", 00:09:33.300 "dma_device_type": 1 00:09:33.300 }, 00:09:33.300 { 00:09:33.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.300 "dma_device_type": 2 00:09:33.300 } 00:09:33.300 ], 00:09:33.300 "driver_specific": { 00:09:33.300 "raid": { 00:09:33.300 "uuid": "eb7f07e9-adc6-489a-a877-8c17bdf4c8db", 00:09:33.300 "strip_size_kb": 0, 00:09:33.300 "state": "online", 00:09:33.300 "raid_level": "raid1", 00:09:33.300 "superblock": true, 00:09:33.300 "num_base_bdevs": 3, 00:09:33.300 "num_base_bdevs_discovered": 3, 00:09:33.300 "num_base_bdevs_operational": 3, 00:09:33.300 "base_bdevs_list": [ 00:09:33.300 { 00:09:33.300 "name": "BaseBdev1", 00:09:33.300 "uuid": "c60e4617-2908-4e8e-a503-7f1c038bb6e2", 00:09:33.300 "is_configured": true, 00:09:33.300 "data_offset": 2048, 00:09:33.300 "data_size": 63488 00:09:33.300 }, 00:09:33.300 { 00:09:33.300 "name": "BaseBdev2", 00:09:33.300 "uuid": "3aa13b84-a251-4cf4-9642-a268e4ff7962", 00:09:33.300 "is_configured": true, 00:09:33.300 "data_offset": 2048, 00:09:33.300 "data_size": 63488 00:09:33.300 }, 00:09:33.300 { 
00:09:33.300 "name": "BaseBdev3", 00:09:33.300 "uuid": "05e50818-cdd8-4b13-ad35-2229de5bcff8", 00:09:33.300 "is_configured": true, 00:09:33.300 "data_offset": 2048, 00:09:33.300 "data_size": 63488 00:09:33.300 } 00:09:33.300 ] 00:09:33.300 } 00:09:33.300 } 00:09:33.300 }' 00:09:33.300 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:33.300 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:33.300 BaseBdev2 00:09:33.300 BaseBdev3' 00:09:33.300 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:33.300 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:33.300 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:33.300 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:33.300 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:33.300 13:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.300 13:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.300 13:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.300 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:33.300 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:33.300 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:33.300 13:19:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:33.300 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:33.300 13:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.300 13:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.300 13:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.561 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:33.561 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:33.561 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:33.561 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:33.561 13:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.561 13:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.561 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:33.561 13:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.561 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:33.561 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:33.561 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:33.561 13:19:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.561 13:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.561 [2024-11-17 13:19:22.577753] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:33.561 13:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.561 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:33.561 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:33.561 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:33.561 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:09:33.561 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:33.561 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:33.561 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.561 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:33.561 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:33.561 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:33.561 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:33.561 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.561 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.561 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.561 
13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.561 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.561 13:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.561 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.561 13:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.561 13:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.561 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.561 "name": "Existed_Raid", 00:09:33.561 "uuid": "eb7f07e9-adc6-489a-a877-8c17bdf4c8db", 00:09:33.561 "strip_size_kb": 0, 00:09:33.561 "state": "online", 00:09:33.561 "raid_level": "raid1", 00:09:33.561 "superblock": true, 00:09:33.561 "num_base_bdevs": 3, 00:09:33.561 "num_base_bdevs_discovered": 2, 00:09:33.561 "num_base_bdevs_operational": 2, 00:09:33.561 "base_bdevs_list": [ 00:09:33.561 { 00:09:33.561 "name": null, 00:09:33.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.561 "is_configured": false, 00:09:33.561 "data_offset": 0, 00:09:33.561 "data_size": 63488 00:09:33.561 }, 00:09:33.561 { 00:09:33.561 "name": "BaseBdev2", 00:09:33.561 "uuid": "3aa13b84-a251-4cf4-9642-a268e4ff7962", 00:09:33.561 "is_configured": true, 00:09:33.561 "data_offset": 2048, 00:09:33.561 "data_size": 63488 00:09:33.561 }, 00:09:33.561 { 00:09:33.561 "name": "BaseBdev3", 00:09:33.561 "uuid": "05e50818-cdd8-4b13-ad35-2229de5bcff8", 00:09:33.561 "is_configured": true, 00:09:33.561 "data_offset": 2048, 00:09:33.561 "data_size": 63488 00:09:33.561 } 00:09:33.561 ] 00:09:33.561 }' 00:09:33.561 13:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.561 
13:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.130 13:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:34.130 13:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:34.130 13:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.130 13:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:34.130 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.130 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.130 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.130 13:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:34.130 13:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:34.130 13:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:34.130 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.130 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.130 [2024-11-17 13:19:23.220056] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:34.130 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.130 13:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:34.130 13:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:34.130 13:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:34.130 13:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:34.130 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.130 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.130 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.391 13:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:34.391 13:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:34.391 13:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:34.391 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.391 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.391 [2024-11-17 13:19:23.383117] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:34.391 [2024-11-17 13:19:23.383287] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:34.391 [2024-11-17 13:19:23.496130] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:34.391 [2024-11-17 13:19:23.496229] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:34.391 [2024-11-17 13:19:23.496247] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:34.391 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.391 13:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:34.391 13:19:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:34.391 13:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.391 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.391 13:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:34.391 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.391 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.391 13:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:34.391 13:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:34.391 13:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:34.391 13:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:34.391 13:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:34.391 13:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:34.391 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.391 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.391 BaseBdev2 00:09:34.391 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.391 13:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:34.391 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:34.391 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:09:34.391 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:34.391 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:34.391 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:34.391 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:34.391 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.391 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.651 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.651 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:34.651 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.651 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.651 [ 00:09:34.651 { 00:09:34.651 "name": "BaseBdev2", 00:09:34.651 "aliases": [ 00:09:34.651 "d3c5047c-fbcc-48c9-a14a-68538c46e366" 00:09:34.651 ], 00:09:34.651 "product_name": "Malloc disk", 00:09:34.651 "block_size": 512, 00:09:34.651 "num_blocks": 65536, 00:09:34.651 "uuid": "d3c5047c-fbcc-48c9-a14a-68538c46e366", 00:09:34.651 "assigned_rate_limits": { 00:09:34.651 "rw_ios_per_sec": 0, 00:09:34.651 "rw_mbytes_per_sec": 0, 00:09:34.651 "r_mbytes_per_sec": 0, 00:09:34.651 "w_mbytes_per_sec": 0 00:09:34.651 }, 00:09:34.651 "claimed": false, 00:09:34.651 "zoned": false, 00:09:34.651 "supported_io_types": { 00:09:34.651 "read": true, 00:09:34.651 "write": true, 00:09:34.652 "unmap": true, 00:09:34.652 "flush": true, 00:09:34.652 "reset": true, 00:09:34.652 "nvme_admin": false, 00:09:34.652 "nvme_io": false, 00:09:34.652 
"nvme_io_md": false, 00:09:34.652 "write_zeroes": true, 00:09:34.652 "zcopy": true, 00:09:34.652 "get_zone_info": false, 00:09:34.652 "zone_management": false, 00:09:34.652 "zone_append": false, 00:09:34.652 "compare": false, 00:09:34.652 "compare_and_write": false, 00:09:34.652 "abort": true, 00:09:34.652 "seek_hole": false, 00:09:34.652 "seek_data": false, 00:09:34.652 "copy": true, 00:09:34.652 "nvme_iov_md": false 00:09:34.652 }, 00:09:34.652 "memory_domains": [ 00:09:34.652 { 00:09:34.652 "dma_device_id": "system", 00:09:34.652 "dma_device_type": 1 00:09:34.652 }, 00:09:34.652 { 00:09:34.652 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.652 "dma_device_type": 2 00:09:34.652 } 00:09:34.652 ], 00:09:34.652 "driver_specific": {} 00:09:34.652 } 00:09:34.652 ] 00:09:34.652 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.652 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:34.652 13:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:34.652 13:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:34.652 13:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:34.652 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.652 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.652 BaseBdev3 00:09:34.652 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.652 13:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:34.652 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:34.652 13:19:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:34.652 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:34.652 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:34.652 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:34.652 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:34.652 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.652 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.652 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.652 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:34.652 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.652 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.652 [ 00:09:34.652 { 00:09:34.652 "name": "BaseBdev3", 00:09:34.652 "aliases": [ 00:09:34.652 "02213ed4-fe8e-47a6-afdb-602e4a6853a5" 00:09:34.652 ], 00:09:34.652 "product_name": "Malloc disk", 00:09:34.652 "block_size": 512, 00:09:34.652 "num_blocks": 65536, 00:09:34.652 "uuid": "02213ed4-fe8e-47a6-afdb-602e4a6853a5", 00:09:34.652 "assigned_rate_limits": { 00:09:34.652 "rw_ios_per_sec": 0, 00:09:34.652 "rw_mbytes_per_sec": 0, 00:09:34.652 "r_mbytes_per_sec": 0, 00:09:34.652 "w_mbytes_per_sec": 0 00:09:34.652 }, 00:09:34.652 "claimed": false, 00:09:34.652 "zoned": false, 00:09:34.652 "supported_io_types": { 00:09:34.652 "read": true, 00:09:34.652 "write": true, 00:09:34.652 "unmap": true, 00:09:34.652 "flush": true, 00:09:34.652 "reset": true, 00:09:34.652 "nvme_admin": false, 
00:09:34.652 "nvme_io": false, 00:09:34.652 "nvme_io_md": false, 00:09:34.652 "write_zeroes": true, 00:09:34.652 "zcopy": true, 00:09:34.652 "get_zone_info": false, 00:09:34.652 "zone_management": false, 00:09:34.652 "zone_append": false, 00:09:34.652 "compare": false, 00:09:34.652 "compare_and_write": false, 00:09:34.652 "abort": true, 00:09:34.652 "seek_hole": false, 00:09:34.652 "seek_data": false, 00:09:34.652 "copy": true, 00:09:34.652 "nvme_iov_md": false 00:09:34.652 }, 00:09:34.652 "memory_domains": [ 00:09:34.652 { 00:09:34.652 "dma_device_id": "system", 00:09:34.652 "dma_device_type": 1 00:09:34.652 }, 00:09:34.652 { 00:09:34.652 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.652 "dma_device_type": 2 00:09:34.652 } 00:09:34.652 ], 00:09:34.652 "driver_specific": {} 00:09:34.652 } 00:09:34.652 ] 00:09:34.652 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.652 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:34.652 13:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:34.652 13:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:34.652 13:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:34.652 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.652 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.652 [2024-11-17 13:19:23.736396] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:34.652 [2024-11-17 13:19:23.736545] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:34.652 [2024-11-17 13:19:23.736595] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:34.652 [2024-11-17 13:19:23.738980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:34.652 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.652 13:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:34.652 13:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.652 13:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.652 13:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:34.652 13:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:34.652 13:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:34.652 13:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.652 13:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.652 13:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.652 13:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.652 13:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.652 13:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.652 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.652 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.652 
13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.652 13:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.652 "name": "Existed_Raid", 00:09:34.652 "uuid": "1b9290b1-41c3-41f4-bd53-196a7ffa962a", 00:09:34.652 "strip_size_kb": 0, 00:09:34.652 "state": "configuring", 00:09:34.652 "raid_level": "raid1", 00:09:34.652 "superblock": true, 00:09:34.652 "num_base_bdevs": 3, 00:09:34.652 "num_base_bdevs_discovered": 2, 00:09:34.652 "num_base_bdevs_operational": 3, 00:09:34.652 "base_bdevs_list": [ 00:09:34.652 { 00:09:34.652 "name": "BaseBdev1", 00:09:34.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.652 "is_configured": false, 00:09:34.653 "data_offset": 0, 00:09:34.653 "data_size": 0 00:09:34.653 }, 00:09:34.653 { 00:09:34.653 "name": "BaseBdev2", 00:09:34.653 "uuid": "d3c5047c-fbcc-48c9-a14a-68538c46e366", 00:09:34.653 "is_configured": true, 00:09:34.653 "data_offset": 2048, 00:09:34.653 "data_size": 63488 00:09:34.653 }, 00:09:34.653 { 00:09:34.653 "name": "BaseBdev3", 00:09:34.653 "uuid": "02213ed4-fe8e-47a6-afdb-602e4a6853a5", 00:09:34.653 "is_configured": true, 00:09:34.653 "data_offset": 2048, 00:09:34.653 "data_size": 63488 00:09:34.653 } 00:09:34.653 ] 00:09:34.653 }' 00:09:34.653 13:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.653 13:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.221 13:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:35.221 13:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.221 13:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.221 [2024-11-17 13:19:24.223615] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:35.221 13:19:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.221 13:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:35.221 13:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.221 13:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:35.221 13:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:35.221 13:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:35.221 13:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:35.221 13:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.221 13:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.221 13:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.221 13:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.221 13:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.221 13:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.221 13:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.221 13:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.221 13:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.221 13:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.221 "name": 
"Existed_Raid", 00:09:35.221 "uuid": "1b9290b1-41c3-41f4-bd53-196a7ffa962a", 00:09:35.221 "strip_size_kb": 0, 00:09:35.221 "state": "configuring", 00:09:35.221 "raid_level": "raid1", 00:09:35.221 "superblock": true, 00:09:35.221 "num_base_bdevs": 3, 00:09:35.221 "num_base_bdevs_discovered": 1, 00:09:35.221 "num_base_bdevs_operational": 3, 00:09:35.221 "base_bdevs_list": [ 00:09:35.221 { 00:09:35.221 "name": "BaseBdev1", 00:09:35.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.221 "is_configured": false, 00:09:35.221 "data_offset": 0, 00:09:35.221 "data_size": 0 00:09:35.221 }, 00:09:35.221 { 00:09:35.221 "name": null, 00:09:35.221 "uuid": "d3c5047c-fbcc-48c9-a14a-68538c46e366", 00:09:35.221 "is_configured": false, 00:09:35.221 "data_offset": 0, 00:09:35.221 "data_size": 63488 00:09:35.221 }, 00:09:35.221 { 00:09:35.221 "name": "BaseBdev3", 00:09:35.221 "uuid": "02213ed4-fe8e-47a6-afdb-602e4a6853a5", 00:09:35.221 "is_configured": true, 00:09:35.221 "data_offset": 2048, 00:09:35.221 "data_size": 63488 00:09:35.221 } 00:09:35.221 ] 00:09:35.221 }' 00:09:35.221 13:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.221 13:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.480 13:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.480 13:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:35.480 13:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.480 13:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.480 13:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.740 13:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:35.740 
13:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:35.740 13:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.740 13:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.740 [2024-11-17 13:19:24.751617] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:35.740 BaseBdev1 00:09:35.740 13:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.740 13:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:35.740 13:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:35.740 13:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:35.740 13:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:35.740 13:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:35.740 13:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:35.740 13:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:35.740 13:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.740 13:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.740 13:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.740 13:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:35.740 13:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:35.740 13:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.740 [ 00:09:35.740 { 00:09:35.740 "name": "BaseBdev1", 00:09:35.740 "aliases": [ 00:09:35.740 "87211761-c3b3-4bbc-a637-2a510f8ffaa5" 00:09:35.740 ], 00:09:35.740 "product_name": "Malloc disk", 00:09:35.740 "block_size": 512, 00:09:35.740 "num_blocks": 65536, 00:09:35.740 "uuid": "87211761-c3b3-4bbc-a637-2a510f8ffaa5", 00:09:35.741 "assigned_rate_limits": { 00:09:35.741 "rw_ios_per_sec": 0, 00:09:35.741 "rw_mbytes_per_sec": 0, 00:09:35.741 "r_mbytes_per_sec": 0, 00:09:35.741 "w_mbytes_per_sec": 0 00:09:35.741 }, 00:09:35.741 "claimed": true, 00:09:35.741 "claim_type": "exclusive_write", 00:09:35.741 "zoned": false, 00:09:35.741 "supported_io_types": { 00:09:35.741 "read": true, 00:09:35.741 "write": true, 00:09:35.741 "unmap": true, 00:09:35.741 "flush": true, 00:09:35.741 "reset": true, 00:09:35.741 "nvme_admin": false, 00:09:35.741 "nvme_io": false, 00:09:35.741 "nvme_io_md": false, 00:09:35.741 "write_zeroes": true, 00:09:35.741 "zcopy": true, 00:09:35.741 "get_zone_info": false, 00:09:35.741 "zone_management": false, 00:09:35.741 "zone_append": false, 00:09:35.741 "compare": false, 00:09:35.741 "compare_and_write": false, 00:09:35.741 "abort": true, 00:09:35.741 "seek_hole": false, 00:09:35.741 "seek_data": false, 00:09:35.741 "copy": true, 00:09:35.741 "nvme_iov_md": false 00:09:35.741 }, 00:09:35.741 "memory_domains": [ 00:09:35.741 { 00:09:35.741 "dma_device_id": "system", 00:09:35.741 "dma_device_type": 1 00:09:35.741 }, 00:09:35.741 { 00:09:35.741 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.741 "dma_device_type": 2 00:09:35.741 } 00:09:35.741 ], 00:09:35.741 "driver_specific": {} 00:09:35.741 } 00:09:35.741 ] 00:09:35.741 13:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.741 13:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:35.741 
13:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:35.741 13:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.741 13:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:35.741 13:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:35.741 13:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:35.741 13:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:35.741 13:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.741 13:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.741 13:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.741 13:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.741 13:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.741 13:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.741 13:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.741 13:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.741 13:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.741 13:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.741 "name": "Existed_Raid", 00:09:35.741 "uuid": "1b9290b1-41c3-41f4-bd53-196a7ffa962a", 00:09:35.741 "strip_size_kb": 0, 
00:09:35.741 "state": "configuring", 00:09:35.741 "raid_level": "raid1", 00:09:35.741 "superblock": true, 00:09:35.741 "num_base_bdevs": 3, 00:09:35.741 "num_base_bdevs_discovered": 2, 00:09:35.741 "num_base_bdevs_operational": 3, 00:09:35.741 "base_bdevs_list": [ 00:09:35.741 { 00:09:35.741 "name": "BaseBdev1", 00:09:35.741 "uuid": "87211761-c3b3-4bbc-a637-2a510f8ffaa5", 00:09:35.741 "is_configured": true, 00:09:35.741 "data_offset": 2048, 00:09:35.741 "data_size": 63488 00:09:35.741 }, 00:09:35.741 { 00:09:35.741 "name": null, 00:09:35.741 "uuid": "d3c5047c-fbcc-48c9-a14a-68538c46e366", 00:09:35.741 "is_configured": false, 00:09:35.741 "data_offset": 0, 00:09:35.741 "data_size": 63488 00:09:35.741 }, 00:09:35.741 { 00:09:35.741 "name": "BaseBdev3", 00:09:35.741 "uuid": "02213ed4-fe8e-47a6-afdb-602e4a6853a5", 00:09:35.741 "is_configured": true, 00:09:35.741 "data_offset": 2048, 00:09:35.741 "data_size": 63488 00:09:35.741 } 00:09:35.741 ] 00:09:35.741 }' 00:09:35.741 13:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.741 13:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.310 13:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.310 13:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:36.310 13:19:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.310 13:19:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.310 13:19:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.310 13:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:36.310 13:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:09:36.310 13:19:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.310 13:19:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.310 [2024-11-17 13:19:25.278762] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:36.310 13:19:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.310 13:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:36.310 13:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.310 13:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:36.310 13:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:36.310 13:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:36.310 13:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:36.310 13:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.310 13:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.310 13:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.311 13:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.311 13:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.311 13:19:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.311 13:19:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.311 13:19:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.311 13:19:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.311 13:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.311 "name": "Existed_Raid", 00:09:36.311 "uuid": "1b9290b1-41c3-41f4-bd53-196a7ffa962a", 00:09:36.311 "strip_size_kb": 0, 00:09:36.311 "state": "configuring", 00:09:36.311 "raid_level": "raid1", 00:09:36.311 "superblock": true, 00:09:36.311 "num_base_bdevs": 3, 00:09:36.311 "num_base_bdevs_discovered": 1, 00:09:36.311 "num_base_bdevs_operational": 3, 00:09:36.311 "base_bdevs_list": [ 00:09:36.311 { 00:09:36.311 "name": "BaseBdev1", 00:09:36.311 "uuid": "87211761-c3b3-4bbc-a637-2a510f8ffaa5", 00:09:36.311 "is_configured": true, 00:09:36.311 "data_offset": 2048, 00:09:36.311 "data_size": 63488 00:09:36.311 }, 00:09:36.311 { 00:09:36.311 "name": null, 00:09:36.311 "uuid": "d3c5047c-fbcc-48c9-a14a-68538c46e366", 00:09:36.311 "is_configured": false, 00:09:36.311 "data_offset": 0, 00:09:36.311 "data_size": 63488 00:09:36.311 }, 00:09:36.311 { 00:09:36.311 "name": null, 00:09:36.311 "uuid": "02213ed4-fe8e-47a6-afdb-602e4a6853a5", 00:09:36.311 "is_configured": false, 00:09:36.311 "data_offset": 0, 00:09:36.311 "data_size": 63488 00:09:36.311 } 00:09:36.311 ] 00:09:36.311 }' 00:09:36.311 13:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.311 13:19:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.570 13:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.570 13:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:36.570 13:19:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:36.570 13:19:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.571 13:19:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.571 13:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:36.571 13:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:36.571 13:19:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.571 13:19:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.571 [2024-11-17 13:19:25.766071] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:36.571 13:19:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.571 13:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:36.571 13:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.571 13:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:36.571 13:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:36.571 13:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:36.571 13:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:36.571 13:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.571 13:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.571 13:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:09:36.571 13:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.571 13:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.571 13:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.571 13:19:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.571 13:19:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.830 13:19:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.830 13:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.830 "name": "Existed_Raid", 00:09:36.830 "uuid": "1b9290b1-41c3-41f4-bd53-196a7ffa962a", 00:09:36.830 "strip_size_kb": 0, 00:09:36.830 "state": "configuring", 00:09:36.830 "raid_level": "raid1", 00:09:36.830 "superblock": true, 00:09:36.830 "num_base_bdevs": 3, 00:09:36.830 "num_base_bdevs_discovered": 2, 00:09:36.830 "num_base_bdevs_operational": 3, 00:09:36.830 "base_bdevs_list": [ 00:09:36.830 { 00:09:36.830 "name": "BaseBdev1", 00:09:36.830 "uuid": "87211761-c3b3-4bbc-a637-2a510f8ffaa5", 00:09:36.830 "is_configured": true, 00:09:36.830 "data_offset": 2048, 00:09:36.830 "data_size": 63488 00:09:36.830 }, 00:09:36.830 { 00:09:36.830 "name": null, 00:09:36.830 "uuid": "d3c5047c-fbcc-48c9-a14a-68538c46e366", 00:09:36.830 "is_configured": false, 00:09:36.830 "data_offset": 0, 00:09:36.830 "data_size": 63488 00:09:36.830 }, 00:09:36.830 { 00:09:36.830 "name": "BaseBdev3", 00:09:36.830 "uuid": "02213ed4-fe8e-47a6-afdb-602e4a6853a5", 00:09:36.830 "is_configured": true, 00:09:36.830 "data_offset": 2048, 00:09:36.830 "data_size": 63488 00:09:36.830 } 00:09:36.830 ] 00:09:36.830 }' 00:09:36.830 13:19:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.830 13:19:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.089 13:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.089 13:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.089 13:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.089 13:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:37.089 13:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.089 13:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:37.089 13:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:37.089 13:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.089 13:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.089 [2024-11-17 13:19:26.261159] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:37.349 13:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.349 13:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:37.349 13:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.349 13:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.349 13:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:37.349 13:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:09:37.349 13:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:37.349 13:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.349 13:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.349 13:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.349 13:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.349 13:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.349 13:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.349 13:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.349 13:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.349 13:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.349 13:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.349 "name": "Existed_Raid", 00:09:37.349 "uuid": "1b9290b1-41c3-41f4-bd53-196a7ffa962a", 00:09:37.349 "strip_size_kb": 0, 00:09:37.349 "state": "configuring", 00:09:37.349 "raid_level": "raid1", 00:09:37.349 "superblock": true, 00:09:37.349 "num_base_bdevs": 3, 00:09:37.349 "num_base_bdevs_discovered": 1, 00:09:37.349 "num_base_bdevs_operational": 3, 00:09:37.349 "base_bdevs_list": [ 00:09:37.349 { 00:09:37.349 "name": null, 00:09:37.349 "uuid": "87211761-c3b3-4bbc-a637-2a510f8ffaa5", 00:09:37.349 "is_configured": false, 00:09:37.349 "data_offset": 0, 00:09:37.349 "data_size": 63488 00:09:37.349 }, 00:09:37.349 { 00:09:37.349 "name": null, 00:09:37.349 "uuid": 
"d3c5047c-fbcc-48c9-a14a-68538c46e366", 00:09:37.349 "is_configured": false, 00:09:37.349 "data_offset": 0, 00:09:37.349 "data_size": 63488 00:09:37.349 }, 00:09:37.349 { 00:09:37.349 "name": "BaseBdev3", 00:09:37.349 "uuid": "02213ed4-fe8e-47a6-afdb-602e4a6853a5", 00:09:37.349 "is_configured": true, 00:09:37.349 "data_offset": 2048, 00:09:37.349 "data_size": 63488 00:09:37.349 } 00:09:37.349 ] 00:09:37.349 }' 00:09:37.349 13:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.349 13:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.919 13:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.919 13:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:37.919 13:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.919 13:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.919 13:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.919 13:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:37.919 13:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:37.919 13:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.919 13:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.919 [2024-11-17 13:19:26.916856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:37.919 13:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.919 13:19:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:37.919 13:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.919 13:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.919 13:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:37.919 13:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:37.919 13:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:37.919 13:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.919 13:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.919 13:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.919 13:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.919 13:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.919 13:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.919 13:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.919 13:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.919 13:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.919 13:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.919 "name": "Existed_Raid", 00:09:37.919 "uuid": "1b9290b1-41c3-41f4-bd53-196a7ffa962a", 00:09:37.919 "strip_size_kb": 0, 00:09:37.919 "state": "configuring", 00:09:37.919 
"raid_level": "raid1", 00:09:37.919 "superblock": true, 00:09:37.919 "num_base_bdevs": 3, 00:09:37.919 "num_base_bdevs_discovered": 2, 00:09:37.919 "num_base_bdevs_operational": 3, 00:09:37.919 "base_bdevs_list": [ 00:09:37.919 { 00:09:37.919 "name": null, 00:09:37.919 "uuid": "87211761-c3b3-4bbc-a637-2a510f8ffaa5", 00:09:37.919 "is_configured": false, 00:09:37.919 "data_offset": 0, 00:09:37.919 "data_size": 63488 00:09:37.919 }, 00:09:37.919 { 00:09:37.919 "name": "BaseBdev2", 00:09:37.919 "uuid": "d3c5047c-fbcc-48c9-a14a-68538c46e366", 00:09:37.919 "is_configured": true, 00:09:37.919 "data_offset": 2048, 00:09:37.919 "data_size": 63488 00:09:37.919 }, 00:09:37.919 { 00:09:37.919 "name": "BaseBdev3", 00:09:37.919 "uuid": "02213ed4-fe8e-47a6-afdb-602e4a6853a5", 00:09:37.919 "is_configured": true, 00:09:37.919 "data_offset": 2048, 00:09:37.919 "data_size": 63488 00:09:37.919 } 00:09:37.919 ] 00:09:37.919 }' 00:09:37.919 13:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.919 13:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.179 13:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:38.179 13:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.179 13:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.179 13:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.179 13:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.179 13:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:38.179 13:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.179 13:19:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:38.179 13:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.179 13:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.439 13:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.439 13:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 87211761-c3b3-4bbc-a637-2a510f8ffaa5 00:09:38.439 13:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.439 13:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.439 [2024-11-17 13:19:27.485327] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:38.439 [2024-11-17 13:19:27.485629] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:38.439 [2024-11-17 13:19:27.485644] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:38.439 [2024-11-17 13:19:27.485959] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:38.439 NewBaseBdev 00:09:38.439 [2024-11-17 13:19:27.486168] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:38.439 [2024-11-17 13:19:27.486194] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:38.439 [2024-11-17 13:19:27.486426] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:38.439 13:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.439 13:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:38.439 
13:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:38.439 13:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:38.439 13:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:38.439 13:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:38.439 13:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:38.439 13:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:38.439 13:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.439 13:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.439 13:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.439 13:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:38.439 13:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.439 13:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.439 [ 00:09:38.439 { 00:09:38.439 "name": "NewBaseBdev", 00:09:38.439 "aliases": [ 00:09:38.439 "87211761-c3b3-4bbc-a637-2a510f8ffaa5" 00:09:38.439 ], 00:09:38.439 "product_name": "Malloc disk", 00:09:38.439 "block_size": 512, 00:09:38.439 "num_blocks": 65536, 00:09:38.439 "uuid": "87211761-c3b3-4bbc-a637-2a510f8ffaa5", 00:09:38.439 "assigned_rate_limits": { 00:09:38.439 "rw_ios_per_sec": 0, 00:09:38.439 "rw_mbytes_per_sec": 0, 00:09:38.439 "r_mbytes_per_sec": 0, 00:09:38.439 "w_mbytes_per_sec": 0 00:09:38.439 }, 00:09:38.439 "claimed": true, 00:09:38.439 "claim_type": "exclusive_write", 00:09:38.439 
"zoned": false, 00:09:38.439 "supported_io_types": { 00:09:38.439 "read": true, 00:09:38.439 "write": true, 00:09:38.439 "unmap": true, 00:09:38.439 "flush": true, 00:09:38.439 "reset": true, 00:09:38.439 "nvme_admin": false, 00:09:38.439 "nvme_io": false, 00:09:38.439 "nvme_io_md": false, 00:09:38.439 "write_zeroes": true, 00:09:38.439 "zcopy": true, 00:09:38.439 "get_zone_info": false, 00:09:38.439 "zone_management": false, 00:09:38.439 "zone_append": false, 00:09:38.439 "compare": false, 00:09:38.439 "compare_and_write": false, 00:09:38.439 "abort": true, 00:09:38.439 "seek_hole": false, 00:09:38.439 "seek_data": false, 00:09:38.439 "copy": true, 00:09:38.439 "nvme_iov_md": false 00:09:38.439 }, 00:09:38.439 "memory_domains": [ 00:09:38.439 { 00:09:38.439 "dma_device_id": "system", 00:09:38.439 "dma_device_type": 1 00:09:38.439 }, 00:09:38.439 { 00:09:38.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.439 "dma_device_type": 2 00:09:38.439 } 00:09:38.439 ], 00:09:38.439 "driver_specific": {} 00:09:38.439 } 00:09:38.439 ] 00:09:38.439 13:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.439 13:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:38.439 13:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:38.439 13:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.439 13:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:38.439 13:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:38.440 13:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:38.440 13:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:09:38.440 13:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.440 13:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.440 13:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.440 13:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.440 13:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.440 13:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.440 13:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.440 13:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.440 13:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.440 13:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.440 "name": "Existed_Raid", 00:09:38.440 "uuid": "1b9290b1-41c3-41f4-bd53-196a7ffa962a", 00:09:38.440 "strip_size_kb": 0, 00:09:38.440 "state": "online", 00:09:38.440 "raid_level": "raid1", 00:09:38.440 "superblock": true, 00:09:38.440 "num_base_bdevs": 3, 00:09:38.440 "num_base_bdevs_discovered": 3, 00:09:38.440 "num_base_bdevs_operational": 3, 00:09:38.440 "base_bdevs_list": [ 00:09:38.440 { 00:09:38.440 "name": "NewBaseBdev", 00:09:38.440 "uuid": "87211761-c3b3-4bbc-a637-2a510f8ffaa5", 00:09:38.440 "is_configured": true, 00:09:38.440 "data_offset": 2048, 00:09:38.440 "data_size": 63488 00:09:38.440 }, 00:09:38.440 { 00:09:38.440 "name": "BaseBdev2", 00:09:38.440 "uuid": "d3c5047c-fbcc-48c9-a14a-68538c46e366", 00:09:38.440 "is_configured": true, 00:09:38.440 "data_offset": 2048, 00:09:38.440 "data_size": 63488 00:09:38.440 }, 00:09:38.440 
{ 00:09:38.440 "name": "BaseBdev3", 00:09:38.440 "uuid": "02213ed4-fe8e-47a6-afdb-602e4a6853a5", 00:09:38.440 "is_configured": true, 00:09:38.440 "data_offset": 2048, 00:09:38.440 "data_size": 63488 00:09:38.440 } 00:09:38.440 ] 00:09:38.440 }' 00:09:38.440 13:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.440 13:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.024 13:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:39.024 13:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:39.024 13:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:39.024 13:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:39.024 13:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:39.024 13:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:39.024 13:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:39.024 13:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:39.024 13:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.024 13:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.024 [2024-11-17 13:19:27.977152] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:39.024 13:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.024 13:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:39.024 "name": "Existed_Raid", 00:09:39.024 
"aliases": [ 00:09:39.024 "1b9290b1-41c3-41f4-bd53-196a7ffa962a" 00:09:39.024 ], 00:09:39.024 "product_name": "Raid Volume", 00:09:39.024 "block_size": 512, 00:09:39.024 "num_blocks": 63488, 00:09:39.024 "uuid": "1b9290b1-41c3-41f4-bd53-196a7ffa962a", 00:09:39.024 "assigned_rate_limits": { 00:09:39.024 "rw_ios_per_sec": 0, 00:09:39.024 "rw_mbytes_per_sec": 0, 00:09:39.024 "r_mbytes_per_sec": 0, 00:09:39.024 "w_mbytes_per_sec": 0 00:09:39.024 }, 00:09:39.024 "claimed": false, 00:09:39.024 "zoned": false, 00:09:39.024 "supported_io_types": { 00:09:39.024 "read": true, 00:09:39.024 "write": true, 00:09:39.024 "unmap": false, 00:09:39.024 "flush": false, 00:09:39.024 "reset": true, 00:09:39.024 "nvme_admin": false, 00:09:39.024 "nvme_io": false, 00:09:39.024 "nvme_io_md": false, 00:09:39.024 "write_zeroes": true, 00:09:39.024 "zcopy": false, 00:09:39.024 "get_zone_info": false, 00:09:39.024 "zone_management": false, 00:09:39.024 "zone_append": false, 00:09:39.024 "compare": false, 00:09:39.024 "compare_and_write": false, 00:09:39.024 "abort": false, 00:09:39.024 "seek_hole": false, 00:09:39.024 "seek_data": false, 00:09:39.024 "copy": false, 00:09:39.024 "nvme_iov_md": false 00:09:39.024 }, 00:09:39.024 "memory_domains": [ 00:09:39.024 { 00:09:39.024 "dma_device_id": "system", 00:09:39.024 "dma_device_type": 1 00:09:39.024 }, 00:09:39.024 { 00:09:39.024 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.024 "dma_device_type": 2 00:09:39.024 }, 00:09:39.024 { 00:09:39.024 "dma_device_id": "system", 00:09:39.024 "dma_device_type": 1 00:09:39.024 }, 00:09:39.024 { 00:09:39.024 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.024 "dma_device_type": 2 00:09:39.024 }, 00:09:39.024 { 00:09:39.024 "dma_device_id": "system", 00:09:39.024 "dma_device_type": 1 00:09:39.024 }, 00:09:39.024 { 00:09:39.024 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.024 "dma_device_type": 2 00:09:39.024 } 00:09:39.024 ], 00:09:39.024 "driver_specific": { 00:09:39.024 "raid": { 00:09:39.024 
"uuid": "1b9290b1-41c3-41f4-bd53-196a7ffa962a", 00:09:39.024 "strip_size_kb": 0, 00:09:39.024 "state": "online", 00:09:39.024 "raid_level": "raid1", 00:09:39.024 "superblock": true, 00:09:39.024 "num_base_bdevs": 3, 00:09:39.024 "num_base_bdevs_discovered": 3, 00:09:39.024 "num_base_bdevs_operational": 3, 00:09:39.024 "base_bdevs_list": [ 00:09:39.024 { 00:09:39.024 "name": "NewBaseBdev", 00:09:39.024 "uuid": "87211761-c3b3-4bbc-a637-2a510f8ffaa5", 00:09:39.024 "is_configured": true, 00:09:39.024 "data_offset": 2048, 00:09:39.024 "data_size": 63488 00:09:39.024 }, 00:09:39.024 { 00:09:39.024 "name": "BaseBdev2", 00:09:39.024 "uuid": "d3c5047c-fbcc-48c9-a14a-68538c46e366", 00:09:39.024 "is_configured": true, 00:09:39.024 "data_offset": 2048, 00:09:39.024 "data_size": 63488 00:09:39.024 }, 00:09:39.024 { 00:09:39.024 "name": "BaseBdev3", 00:09:39.024 "uuid": "02213ed4-fe8e-47a6-afdb-602e4a6853a5", 00:09:39.024 "is_configured": true, 00:09:39.024 "data_offset": 2048, 00:09:39.024 "data_size": 63488 00:09:39.024 } 00:09:39.024 ] 00:09:39.024 } 00:09:39.024 } 00:09:39.024 }' 00:09:39.024 13:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:39.024 13:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:39.024 BaseBdev2 00:09:39.024 BaseBdev3' 00:09:39.024 13:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.024 13:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:39.024 13:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:39.024 13:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.024 
13:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:39.024 13:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.024 13:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.024 13:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.024 13:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:39.024 13:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:39.024 13:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:39.024 13:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:39.024 13:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.024 13:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.024 13:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.024 13:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.024 13:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:39.024 13:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:39.024 13:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:39.024 13:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.024 13:19:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:39.024 13:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.024 13:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.024 13:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.024 13:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:39.024 13:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:39.024 13:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:39.024 13:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.024 13:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.024 [2024-11-17 13:19:28.236301] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:39.024 [2024-11-17 13:19:28.236344] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:39.024 [2024-11-17 13:19:28.236438] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:39.024 [2024-11-17 13:19:28.236776] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:39.024 [2024-11-17 13:19:28.236791] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:39.024 13:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.024 13:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 67942 00:09:39.024 13:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 67942 ']' 00:09:39.024 13:19:28 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 67942 00:09:39.024 13:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:39.283 13:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:39.284 13:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67942 00:09:39.284 13:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:39.284 13:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:39.284 13:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67942' 00:09:39.284 killing process with pid 67942 00:09:39.284 13:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 67942 00:09:39.284 [2024-11-17 13:19:28.286207] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:39.284 13:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 67942 00:09:39.543 [2024-11-17 13:19:28.632749] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:40.922 13:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:40.922 00:09:40.922 real 0m10.958s 00:09:40.922 user 0m17.100s 00:09:40.922 sys 0m2.021s 00:09:40.922 13:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:40.922 13:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.922 ************************************ 00:09:40.922 END TEST raid_state_function_test_sb 00:09:40.922 ************************************ 00:09:40.922 13:19:29 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:09:40.922 13:19:29 
bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:40.922 13:19:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:40.922 13:19:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:40.922 ************************************ 00:09:40.922 START TEST raid_superblock_test 00:09:40.922 ************************************ 00:09:40.922 13:19:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:09:40.922 13:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:09:40.922 13:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:40.922 13:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:40.922 13:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:40.922 13:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:40.922 13:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:40.922 13:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:40.922 13:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:40.922 13:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:40.922 13:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:40.922 13:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:40.922 13:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:40.922 13:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:40.922 13:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:09:40.922 13:19:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:09:40.922 13:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68568 00:09:40.922 13:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:40.922 13:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68568 00:09:40.922 13:19:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 68568 ']' 00:09:40.922 13:19:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.922 13:19:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:40.922 13:19:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:40.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:40.922 13:19:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:40.922 13:19:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.922 [2024-11-17 13:19:30.067443] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:09:40.922 [2024-11-17 13:19:30.067689] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68568 ] 00:09:41.182 [2024-11-17 13:19:30.250844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.182 [2024-11-17 13:19:30.396305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.440 [2024-11-17 13:19:30.647727] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:41.440 [2024-11-17 13:19:30.647825] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:41.699 13:19:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:41.699 13:19:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:41.699 13:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:41.699 13:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:41.699 13:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:41.699 13:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:41.699 13:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:41.699 13:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:41.699 13:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:41.699 13:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:41.699 13:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:41.699 
13:19:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.699 13:19:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.960 malloc1 00:09:41.960 13:19:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.960 13:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:41.960 13:19:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.960 13:19:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.960 [2024-11-17 13:19:30.938831] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:41.960 [2024-11-17 13:19:30.938999] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:41.960 [2024-11-17 13:19:30.939047] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:41.960 [2024-11-17 13:19:30.939099] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:41.960 [2024-11-17 13:19:30.941684] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:41.960 [2024-11-17 13:19:30.941770] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:41.960 pt1 00:09:41.960 13:19:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.960 13:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:41.960 13:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:41.960 13:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:41.960 13:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:41.960 13:19:30 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:41.960 13:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:41.960 13:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:41.960 13:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:41.960 13:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:41.960 13:19:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.960 13:19:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.960 malloc2 00:09:41.960 13:19:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.960 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:41.960 13:19:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.960 13:19:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.960 [2024-11-17 13:19:31.007376] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:41.960 [2024-11-17 13:19:31.007459] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:41.960 [2024-11-17 13:19:31.007489] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:41.960 [2024-11-17 13:19:31.007501] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:41.960 [2024-11-17 13:19:31.010006] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:41.960 [2024-11-17 13:19:31.010054] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:41.960 
pt2 00:09:41.960 13:19:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.960 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:41.960 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:41.960 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:41.960 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:41.960 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:41.960 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:41.960 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:41.960 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:41.960 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:41.960 13:19:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.960 13:19:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.960 malloc3 00:09:41.960 13:19:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.960 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:41.960 13:19:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.960 13:19:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.960 [2024-11-17 13:19:31.082395] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:41.960 [2024-11-17 13:19:31.082542] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:41.960 [2024-11-17 13:19:31.082586] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:41.960 [2024-11-17 13:19:31.082624] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:41.960 [2024-11-17 13:19:31.085164] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:41.960 [2024-11-17 13:19:31.085272] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:41.960 pt3 00:09:41.960 13:19:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.960 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:41.960 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:41.960 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:41.960 13:19:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.960 13:19:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.960 [2024-11-17 13:19:31.094468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:41.960 [2024-11-17 13:19:31.096734] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:41.960 [2024-11-17 13:19:31.096878] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:41.960 [2024-11-17 13:19:31.097117] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:41.960 [2024-11-17 13:19:31.097179] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:41.960 [2024-11-17 13:19:31.097545] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:41.960 
[2024-11-17 13:19:31.097799] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:41.960 [2024-11-17 13:19:31.097852] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:41.960 [2024-11-17 13:19:31.098127] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:41.960 13:19:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.960 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:41.960 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:41.960 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:41.960 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:41.960 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:41.960 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:41.960 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.960 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.960 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.960 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.960 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.960 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:41.960 13:19:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.960 13:19:31 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:41.960 13:19:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.960 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.960 "name": "raid_bdev1", 00:09:41.960 "uuid": "4737b090-d45f-4802-868e-3c55e632948f", 00:09:41.960 "strip_size_kb": 0, 00:09:41.960 "state": "online", 00:09:41.960 "raid_level": "raid1", 00:09:41.960 "superblock": true, 00:09:41.960 "num_base_bdevs": 3, 00:09:41.960 "num_base_bdevs_discovered": 3, 00:09:41.960 "num_base_bdevs_operational": 3, 00:09:41.960 "base_bdevs_list": [ 00:09:41.960 { 00:09:41.960 "name": "pt1", 00:09:41.960 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:41.960 "is_configured": true, 00:09:41.960 "data_offset": 2048, 00:09:41.960 "data_size": 63488 00:09:41.960 }, 00:09:41.960 { 00:09:41.960 "name": "pt2", 00:09:41.960 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:41.960 "is_configured": true, 00:09:41.960 "data_offset": 2048, 00:09:41.960 "data_size": 63488 00:09:41.960 }, 00:09:41.960 { 00:09:41.960 "name": "pt3", 00:09:41.960 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:41.960 "is_configured": true, 00:09:41.960 "data_offset": 2048, 00:09:41.960 "data_size": 63488 00:09:41.960 } 00:09:41.960 ] 00:09:41.960 }' 00:09:41.961 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.961 13:19:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.528 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:42.528 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:42.528 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:42.528 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:42.528 13:19:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:42.528 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:42.528 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:42.528 13:19:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.528 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:42.528 13:19:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.528 [2024-11-17 13:19:31.546071] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:42.528 13:19:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.528 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:42.528 "name": "raid_bdev1", 00:09:42.528 "aliases": [ 00:09:42.529 "4737b090-d45f-4802-868e-3c55e632948f" 00:09:42.529 ], 00:09:42.529 "product_name": "Raid Volume", 00:09:42.529 "block_size": 512, 00:09:42.529 "num_blocks": 63488, 00:09:42.529 "uuid": "4737b090-d45f-4802-868e-3c55e632948f", 00:09:42.529 "assigned_rate_limits": { 00:09:42.529 "rw_ios_per_sec": 0, 00:09:42.529 "rw_mbytes_per_sec": 0, 00:09:42.529 "r_mbytes_per_sec": 0, 00:09:42.529 "w_mbytes_per_sec": 0 00:09:42.529 }, 00:09:42.529 "claimed": false, 00:09:42.529 "zoned": false, 00:09:42.529 "supported_io_types": { 00:09:42.529 "read": true, 00:09:42.529 "write": true, 00:09:42.529 "unmap": false, 00:09:42.529 "flush": false, 00:09:42.529 "reset": true, 00:09:42.529 "nvme_admin": false, 00:09:42.529 "nvme_io": false, 00:09:42.529 "nvme_io_md": false, 00:09:42.529 "write_zeroes": true, 00:09:42.529 "zcopy": false, 00:09:42.529 "get_zone_info": false, 00:09:42.529 "zone_management": false, 00:09:42.529 "zone_append": false, 00:09:42.529 "compare": false, 00:09:42.529 
"compare_and_write": false, 00:09:42.529 "abort": false, 00:09:42.529 "seek_hole": false, 00:09:42.529 "seek_data": false, 00:09:42.529 "copy": false, 00:09:42.529 "nvme_iov_md": false 00:09:42.529 }, 00:09:42.529 "memory_domains": [ 00:09:42.529 { 00:09:42.529 "dma_device_id": "system", 00:09:42.529 "dma_device_type": 1 00:09:42.529 }, 00:09:42.529 { 00:09:42.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.529 "dma_device_type": 2 00:09:42.529 }, 00:09:42.529 { 00:09:42.529 "dma_device_id": "system", 00:09:42.529 "dma_device_type": 1 00:09:42.529 }, 00:09:42.529 { 00:09:42.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.529 "dma_device_type": 2 00:09:42.529 }, 00:09:42.529 { 00:09:42.529 "dma_device_id": "system", 00:09:42.529 "dma_device_type": 1 00:09:42.529 }, 00:09:42.529 { 00:09:42.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.529 "dma_device_type": 2 00:09:42.529 } 00:09:42.529 ], 00:09:42.529 "driver_specific": { 00:09:42.529 "raid": { 00:09:42.529 "uuid": "4737b090-d45f-4802-868e-3c55e632948f", 00:09:42.529 "strip_size_kb": 0, 00:09:42.529 "state": "online", 00:09:42.529 "raid_level": "raid1", 00:09:42.529 "superblock": true, 00:09:42.529 "num_base_bdevs": 3, 00:09:42.529 "num_base_bdevs_discovered": 3, 00:09:42.529 "num_base_bdevs_operational": 3, 00:09:42.529 "base_bdevs_list": [ 00:09:42.529 { 00:09:42.529 "name": "pt1", 00:09:42.529 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:42.529 "is_configured": true, 00:09:42.529 "data_offset": 2048, 00:09:42.529 "data_size": 63488 00:09:42.529 }, 00:09:42.529 { 00:09:42.529 "name": "pt2", 00:09:42.529 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:42.529 "is_configured": true, 00:09:42.529 "data_offset": 2048, 00:09:42.529 "data_size": 63488 00:09:42.529 }, 00:09:42.529 { 00:09:42.529 "name": "pt3", 00:09:42.529 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:42.529 "is_configured": true, 00:09:42.529 "data_offset": 2048, 00:09:42.529 "data_size": 63488 00:09:42.529 } 
00:09:42.529 ] 00:09:42.529 } 00:09:42.529 } 00:09:42.529 }' 00:09:42.529 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:42.529 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:42.529 pt2 00:09:42.529 pt3' 00:09:42.529 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.529 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:42.529 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:42.529 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:42.529 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.529 13:19:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.529 13:19:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.529 13:19:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.529 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:42.529 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:42.529 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:42.529 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:42.529 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.529 13:19:31 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.529 13:19:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.529 13:19:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.789 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:42.789 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:42.789 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:42.789 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:42.789 13:19:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.789 13:19:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.789 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.789 13:19:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.789 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:42.789 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:42.789 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:42.789 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:42.789 13:19:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.789 13:19:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.789 [2024-11-17 13:19:31.833555] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:42.789 13:19:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:09:42.789 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4737b090-d45f-4802-868e-3c55e632948f 00:09:42.789 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 4737b090-d45f-4802-868e-3c55e632948f ']' 00:09:42.789 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:42.789 13:19:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.789 13:19:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.789 [2024-11-17 13:19:31.881136] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:42.789 [2024-11-17 13:19:31.881260] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:42.789 [2024-11-17 13:19:31.881442] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:42.789 [2024-11-17 13:19:31.881590] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:42.789 [2024-11-17 13:19:31.881648] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:42.789 13:19:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.789 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.789 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:42.789 13:19:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.789 13:19:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.789 13:19:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.789 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:09:42.789 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:42.789 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:42.789 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:42.789 13:19:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.789 13:19:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.789 13:19:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.789 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:42.789 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:42.789 13:19:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.789 13:19:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.789 13:19:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.789 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:42.789 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:42.789 13:19:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.789 13:19:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.789 13:19:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.789 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:42.789 13:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:42.789 13:19:31 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.790 13:19:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.790 13:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.050 13:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:43.050 13:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:43.050 13:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:43.050 13:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:43.050 13:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:43.050 13:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:43.050 13:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:43.050 13:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:43.050 13:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:43.050 13:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.050 13:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.050 [2024-11-17 13:19:32.025002] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:43.050 [2024-11-17 13:19:32.027262] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:43.050 [2024-11-17 13:19:32.027400] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:43.050 [2024-11-17 13:19:32.027469] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:43.050 [2024-11-17 13:19:32.027530] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:43.050 [2024-11-17 13:19:32.027553] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:43.050 [2024-11-17 13:19:32.027573] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:43.050 [2024-11-17 13:19:32.027598] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:43.050 request: 00:09:43.050 { 00:09:43.050 "name": "raid_bdev1", 00:09:43.050 "raid_level": "raid1", 00:09:43.050 "base_bdevs": [ 00:09:43.050 "malloc1", 00:09:43.050 "malloc2", 00:09:43.050 "malloc3" 00:09:43.050 ], 00:09:43.050 "superblock": false, 00:09:43.050 "method": "bdev_raid_create", 00:09:43.050 "req_id": 1 00:09:43.050 } 00:09:43.050 Got JSON-RPC error response 00:09:43.050 response: 00:09:43.050 { 00:09:43.050 "code": -17, 00:09:43.050 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:43.050 } 00:09:43.050 13:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:43.050 13:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:43.050 13:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:43.050 13:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:43.050 13:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:43.050 13:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:09:43.050 13:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.050 13:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.050 13:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:43.050 13:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.050 13:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:43.050 13:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:43.050 13:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:43.050 13:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.050 13:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.050 [2024-11-17 13:19:32.088852] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:43.050 [2024-11-17 13:19:32.088931] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:43.051 [2024-11-17 13:19:32.088965] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:43.051 [2024-11-17 13:19:32.088978] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:43.051 [2024-11-17 13:19:32.091764] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:43.051 [2024-11-17 13:19:32.091807] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:43.051 [2024-11-17 13:19:32.091912] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:43.051 [2024-11-17 13:19:32.091988] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:43.051 pt1 00:09:43.051 
13:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.051 13:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:43.051 13:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:43.051 13:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:43.051 13:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:43.051 13:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:43.051 13:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:43.051 13:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.051 13:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.051 13:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.051 13:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.051 13:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.051 13:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:43.051 13:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.051 13:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.051 13:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.051 13:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.051 "name": "raid_bdev1", 00:09:43.051 "uuid": "4737b090-d45f-4802-868e-3c55e632948f", 00:09:43.051 "strip_size_kb": 0, 00:09:43.051 
"state": "configuring", 00:09:43.051 "raid_level": "raid1", 00:09:43.051 "superblock": true, 00:09:43.051 "num_base_bdevs": 3, 00:09:43.051 "num_base_bdevs_discovered": 1, 00:09:43.051 "num_base_bdevs_operational": 3, 00:09:43.051 "base_bdevs_list": [ 00:09:43.051 { 00:09:43.051 "name": "pt1", 00:09:43.051 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:43.051 "is_configured": true, 00:09:43.051 "data_offset": 2048, 00:09:43.051 "data_size": 63488 00:09:43.051 }, 00:09:43.051 { 00:09:43.051 "name": null, 00:09:43.051 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:43.051 "is_configured": false, 00:09:43.051 "data_offset": 2048, 00:09:43.051 "data_size": 63488 00:09:43.051 }, 00:09:43.051 { 00:09:43.051 "name": null, 00:09:43.051 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:43.051 "is_configured": false, 00:09:43.051 "data_offset": 2048, 00:09:43.051 "data_size": 63488 00:09:43.051 } 00:09:43.051 ] 00:09:43.051 }' 00:09:43.051 13:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.051 13:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.311 13:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:43.311 13:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:43.311 13:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.311 13:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.311 [2024-11-17 13:19:32.528121] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:43.311 [2024-11-17 13:19:32.528311] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:43.311 [2024-11-17 13:19:32.528364] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:43.311 
[2024-11-17 13:19:32.528399] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:43.311 [2024-11-17 13:19:32.529077] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:43.311 [2024-11-17 13:19:32.529165] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:43.311 [2024-11-17 13:19:32.529365] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:43.311 [2024-11-17 13:19:32.529435] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:43.311 pt2 00:09:43.311 13:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.311 13:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:43.311 13:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.311 13:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.569 [2024-11-17 13:19:32.536086] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:43.569 13:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.569 13:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:43.569 13:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:43.569 13:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:43.569 13:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:43.569 13:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:43.569 13:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:43.569 13:19:32 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.569 13:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.569 13:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.569 13:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.569 13:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.569 13:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:43.569 13:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.569 13:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.569 13:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.569 13:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.569 "name": "raid_bdev1", 00:09:43.569 "uuid": "4737b090-d45f-4802-868e-3c55e632948f", 00:09:43.569 "strip_size_kb": 0, 00:09:43.569 "state": "configuring", 00:09:43.569 "raid_level": "raid1", 00:09:43.569 "superblock": true, 00:09:43.569 "num_base_bdevs": 3, 00:09:43.569 "num_base_bdevs_discovered": 1, 00:09:43.569 "num_base_bdevs_operational": 3, 00:09:43.569 "base_bdevs_list": [ 00:09:43.569 { 00:09:43.569 "name": "pt1", 00:09:43.569 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:43.570 "is_configured": true, 00:09:43.570 "data_offset": 2048, 00:09:43.570 "data_size": 63488 00:09:43.570 }, 00:09:43.570 { 00:09:43.570 "name": null, 00:09:43.570 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:43.570 "is_configured": false, 00:09:43.570 "data_offset": 0, 00:09:43.570 "data_size": 63488 00:09:43.570 }, 00:09:43.570 { 00:09:43.570 "name": null, 00:09:43.570 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:43.570 "is_configured": false, 00:09:43.570 
"data_offset": 2048, 00:09:43.570 "data_size": 63488 00:09:43.570 } 00:09:43.570 ] 00:09:43.570 }' 00:09:43.570 13:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.570 13:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.829 13:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:43.829 13:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:43.829 13:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:43.829 13:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.829 13:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.829 [2024-11-17 13:19:32.991423] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:43.829 [2024-11-17 13:19:32.991591] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:43.829 [2024-11-17 13:19:32.991637] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:43.829 [2024-11-17 13:19:32.991690] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:43.829 [2024-11-17 13:19:32.992392] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:43.829 [2024-11-17 13:19:32.992479] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:43.829 [2024-11-17 13:19:32.992651] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:43.829 [2024-11-17 13:19:32.992760] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:43.829 pt2 00:09:43.829 13:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.829 13:19:32 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:43.829 13:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:43.829 13:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:43.829 13:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.829 13:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.829 [2024-11-17 13:19:33.007337] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:43.829 [2024-11-17 13:19:33.007440] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:43.829 [2024-11-17 13:19:33.007484] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:43.829 [2024-11-17 13:19:33.007525] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:43.829 [2024-11-17 13:19:33.008029] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:43.829 [2024-11-17 13:19:33.008100] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:43.829 [2024-11-17 13:19:33.008252] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:43.829 [2024-11-17 13:19:33.008316] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:43.829 [2024-11-17 13:19:33.008534] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:43.829 [2024-11-17 13:19:33.008587] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:43.829 [2024-11-17 13:19:33.008924] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:43.829 [2024-11-17 13:19:33.009171] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:09:43.829 [2024-11-17 13:19:33.009237] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:43.829 [2024-11-17 13:19:33.009491] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:43.829 pt3 00:09:43.829 13:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.829 13:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:43.829 13:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:43.829 13:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:43.829 13:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:43.829 13:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:43.829 13:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:43.829 13:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:43.830 13:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:43.830 13:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.830 13:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.830 13:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.830 13:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.830 13:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:43.830 13:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.830 13:19:33 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.830 13:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.830 13:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.089 13:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.089 "name": "raid_bdev1", 00:09:44.089 "uuid": "4737b090-d45f-4802-868e-3c55e632948f", 00:09:44.089 "strip_size_kb": 0, 00:09:44.089 "state": "online", 00:09:44.089 "raid_level": "raid1", 00:09:44.089 "superblock": true, 00:09:44.089 "num_base_bdevs": 3, 00:09:44.089 "num_base_bdevs_discovered": 3, 00:09:44.089 "num_base_bdevs_operational": 3, 00:09:44.089 "base_bdevs_list": [ 00:09:44.089 { 00:09:44.089 "name": "pt1", 00:09:44.089 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:44.089 "is_configured": true, 00:09:44.089 "data_offset": 2048, 00:09:44.089 "data_size": 63488 00:09:44.089 }, 00:09:44.089 { 00:09:44.089 "name": "pt2", 00:09:44.089 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:44.089 "is_configured": true, 00:09:44.089 "data_offset": 2048, 00:09:44.089 "data_size": 63488 00:09:44.089 }, 00:09:44.089 { 00:09:44.089 "name": "pt3", 00:09:44.089 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:44.089 "is_configured": true, 00:09:44.089 "data_offset": 2048, 00:09:44.089 "data_size": 63488 00:09:44.089 } 00:09:44.089 ] 00:09:44.089 }' 00:09:44.089 13:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.089 13:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.349 13:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:44.349 13:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:44.349 13:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:09:44.349 13:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:44.349 13:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:44.349 13:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:44.349 13:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:44.349 13:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:44.349 13:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.349 13:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.349 [2024-11-17 13:19:33.422991] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:44.349 13:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.349 13:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:44.349 "name": "raid_bdev1", 00:09:44.349 "aliases": [ 00:09:44.349 "4737b090-d45f-4802-868e-3c55e632948f" 00:09:44.349 ], 00:09:44.349 "product_name": "Raid Volume", 00:09:44.349 "block_size": 512, 00:09:44.349 "num_blocks": 63488, 00:09:44.349 "uuid": "4737b090-d45f-4802-868e-3c55e632948f", 00:09:44.349 "assigned_rate_limits": { 00:09:44.349 "rw_ios_per_sec": 0, 00:09:44.349 "rw_mbytes_per_sec": 0, 00:09:44.349 "r_mbytes_per_sec": 0, 00:09:44.349 "w_mbytes_per_sec": 0 00:09:44.349 }, 00:09:44.349 "claimed": false, 00:09:44.349 "zoned": false, 00:09:44.349 "supported_io_types": { 00:09:44.349 "read": true, 00:09:44.349 "write": true, 00:09:44.349 "unmap": false, 00:09:44.349 "flush": false, 00:09:44.349 "reset": true, 00:09:44.349 "nvme_admin": false, 00:09:44.349 "nvme_io": false, 00:09:44.349 "nvme_io_md": false, 00:09:44.349 "write_zeroes": true, 00:09:44.349 "zcopy": false, 00:09:44.349 "get_zone_info": 
false, 00:09:44.349 "zone_management": false, 00:09:44.349 "zone_append": false, 00:09:44.349 "compare": false, 00:09:44.349 "compare_and_write": false, 00:09:44.349 "abort": false, 00:09:44.349 "seek_hole": false, 00:09:44.349 "seek_data": false, 00:09:44.349 "copy": false, 00:09:44.349 "nvme_iov_md": false 00:09:44.349 }, 00:09:44.349 "memory_domains": [ 00:09:44.350 { 00:09:44.350 "dma_device_id": "system", 00:09:44.350 "dma_device_type": 1 00:09:44.350 }, 00:09:44.350 { 00:09:44.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.350 "dma_device_type": 2 00:09:44.350 }, 00:09:44.350 { 00:09:44.350 "dma_device_id": "system", 00:09:44.350 "dma_device_type": 1 00:09:44.350 }, 00:09:44.350 { 00:09:44.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.350 "dma_device_type": 2 00:09:44.350 }, 00:09:44.350 { 00:09:44.350 "dma_device_id": "system", 00:09:44.350 "dma_device_type": 1 00:09:44.350 }, 00:09:44.350 { 00:09:44.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.350 "dma_device_type": 2 00:09:44.350 } 00:09:44.350 ], 00:09:44.350 "driver_specific": { 00:09:44.350 "raid": { 00:09:44.350 "uuid": "4737b090-d45f-4802-868e-3c55e632948f", 00:09:44.350 "strip_size_kb": 0, 00:09:44.350 "state": "online", 00:09:44.350 "raid_level": "raid1", 00:09:44.350 "superblock": true, 00:09:44.350 "num_base_bdevs": 3, 00:09:44.350 "num_base_bdevs_discovered": 3, 00:09:44.350 "num_base_bdevs_operational": 3, 00:09:44.350 "base_bdevs_list": [ 00:09:44.350 { 00:09:44.350 "name": "pt1", 00:09:44.350 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:44.350 "is_configured": true, 00:09:44.350 "data_offset": 2048, 00:09:44.350 "data_size": 63488 00:09:44.350 }, 00:09:44.350 { 00:09:44.350 "name": "pt2", 00:09:44.350 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:44.350 "is_configured": true, 00:09:44.350 "data_offset": 2048, 00:09:44.350 "data_size": 63488 00:09:44.350 }, 00:09:44.350 { 00:09:44.350 "name": "pt3", 00:09:44.350 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:09:44.350 "is_configured": true, 00:09:44.350 "data_offset": 2048, 00:09:44.350 "data_size": 63488 00:09:44.350 } 00:09:44.350 ] 00:09:44.350 } 00:09:44.350 } 00:09:44.350 }' 00:09:44.350 13:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:44.350 13:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:44.350 pt2 00:09:44.350 pt3' 00:09:44.350 13:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:44.350 13:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:44.350 13:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:44.350 13:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:44.350 13:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:44.350 13:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.350 13:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.610 13:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.610 13:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:44.610 13:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:44.610 13:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:44.610 13:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:44.610 13:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:09:44.610 13:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.610 13:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:44.610 13:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.610 13:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:44.610 13:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:44.610 13:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:44.610 13:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:44.610 13:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:44.610 13:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.610 13:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.610 13:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.610 13:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:44.610 13:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:44.610 13:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:44.610 13:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:44.610 13:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.610 13:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.610 [2024-11-17 13:19:33.718400] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:44.610 13:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.610 13:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 4737b090-d45f-4802-868e-3c55e632948f '!=' 4737b090-d45f-4802-868e-3c55e632948f ']' 00:09:44.610 13:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:09:44.610 13:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:44.610 13:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:44.610 13:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:09:44.610 13:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.610 13:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.610 [2024-11-17 13:19:33.762110] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:09:44.610 13:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.610 13:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:44.610 13:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:44.610 13:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:44.610 13:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:44.610 13:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:44.610 13:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:44.610 13:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.610 13:19:33 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.610 13:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.610 13:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.610 13:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.610 13:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:44.610 13:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.610 13:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.610 13:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.610 13:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.610 "name": "raid_bdev1", 00:09:44.611 "uuid": "4737b090-d45f-4802-868e-3c55e632948f", 00:09:44.611 "strip_size_kb": 0, 00:09:44.611 "state": "online", 00:09:44.611 "raid_level": "raid1", 00:09:44.611 "superblock": true, 00:09:44.611 "num_base_bdevs": 3, 00:09:44.611 "num_base_bdevs_discovered": 2, 00:09:44.611 "num_base_bdevs_operational": 2, 00:09:44.611 "base_bdevs_list": [ 00:09:44.611 { 00:09:44.611 "name": null, 00:09:44.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.611 "is_configured": false, 00:09:44.611 "data_offset": 0, 00:09:44.611 "data_size": 63488 00:09:44.611 }, 00:09:44.611 { 00:09:44.611 "name": "pt2", 00:09:44.611 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:44.611 "is_configured": true, 00:09:44.611 "data_offset": 2048, 00:09:44.611 "data_size": 63488 00:09:44.611 }, 00:09:44.611 { 00:09:44.611 "name": "pt3", 00:09:44.611 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:44.611 "is_configured": true, 00:09:44.611 "data_offset": 2048, 00:09:44.611 "data_size": 63488 00:09:44.611 } 
00:09:44.611 ] 00:09:44.611 }' 00:09:44.611 13:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.611 13:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.182 13:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:45.182 13:19:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.182 13:19:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.182 [2024-11-17 13:19:34.269261] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:45.182 [2024-11-17 13:19:34.269308] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:45.182 [2024-11-17 13:19:34.269427] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:45.182 [2024-11-17 13:19:34.269510] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:45.182 [2024-11-17 13:19:34.269530] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:45.182 13:19:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.182 13:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.182 13:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:09:45.182 13:19:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.182 13:19:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.182 13:19:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.182 13:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:09:45.182 13:19:34 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:09:45.182 13:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:09:45.182 13:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:45.182 13:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:09:45.182 13:19:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.182 13:19:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.182 13:19:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.182 13:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:45.182 13:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:45.182 13:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:09:45.182 13:19:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.182 13:19:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.182 13:19:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.182 13:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:45.182 13:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:45.182 13:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:09:45.182 13:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:45.182 13:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:45.182 13:19:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.182 13:19:34 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.182 [2024-11-17 13:19:34.357016] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:45.182 [2024-11-17 13:19:34.357105] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:45.182 [2024-11-17 13:19:34.357131] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:09:45.182 [2024-11-17 13:19:34.357147] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:45.182 [2024-11-17 13:19:34.360068] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:45.182 [2024-11-17 13:19:34.360194] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:45.182 [2024-11-17 13:19:34.360330] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:45.182 [2024-11-17 13:19:34.360411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:45.182 pt2 00:09:45.182 13:19:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.182 13:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:45.182 13:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:45.182 13:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:45.182 13:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:45.182 13:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:45.182 13:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:45.182 13:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.182 13:19:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.182 13:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.182 13:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.182 13:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.182 13:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:45.182 13:19:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.182 13:19:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.182 13:19:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.442 13:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.442 "name": "raid_bdev1", 00:09:45.442 "uuid": "4737b090-d45f-4802-868e-3c55e632948f", 00:09:45.442 "strip_size_kb": 0, 00:09:45.442 "state": "configuring", 00:09:45.442 "raid_level": "raid1", 00:09:45.442 "superblock": true, 00:09:45.442 "num_base_bdevs": 3, 00:09:45.442 "num_base_bdevs_discovered": 1, 00:09:45.442 "num_base_bdevs_operational": 2, 00:09:45.442 "base_bdevs_list": [ 00:09:45.442 { 00:09:45.442 "name": null, 00:09:45.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.442 "is_configured": false, 00:09:45.442 "data_offset": 2048, 00:09:45.442 "data_size": 63488 00:09:45.442 }, 00:09:45.442 { 00:09:45.442 "name": "pt2", 00:09:45.442 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:45.442 "is_configured": true, 00:09:45.442 "data_offset": 2048, 00:09:45.442 "data_size": 63488 00:09:45.442 }, 00:09:45.442 { 00:09:45.442 "name": null, 00:09:45.442 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:45.442 "is_configured": false, 00:09:45.442 "data_offset": 2048, 00:09:45.442 "data_size": 63488 00:09:45.442 } 
00:09:45.442 ] 00:09:45.442 }' 00:09:45.442 13:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.442 13:19:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.702 13:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:09:45.702 13:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:45.702 13:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:09:45.702 13:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:45.702 13:19:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.702 13:19:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.702 [2024-11-17 13:19:34.848268] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:45.702 [2024-11-17 13:19:34.848444] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:45.702 [2024-11-17 13:19:34.848494] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:09:45.702 [2024-11-17 13:19:34.848559] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:45.702 [2024-11-17 13:19:34.849291] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:45.702 [2024-11-17 13:19:34.849372] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:45.702 [2024-11-17 13:19:34.849564] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:45.702 [2024-11-17 13:19:34.849647] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:45.702 [2024-11-17 13:19:34.849834] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:09:45.702 [2024-11-17 13:19:34.849885] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:45.702 [2024-11-17 13:19:34.850265] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:45.702 [2024-11-17 13:19:34.850511] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:45.702 [2024-11-17 13:19:34.850559] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:45.702 [2024-11-17 13:19:34.850848] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:45.702 pt3 00:09:45.702 13:19:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.702 13:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:45.702 13:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:45.702 13:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:45.702 13:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:45.702 13:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:45.702 13:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:45.702 13:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.702 13:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.702 13:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.702 13:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.702 13:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.702 
13:19:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.702 13:19:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.702 13:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:45.702 13:19:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.702 13:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.702 "name": "raid_bdev1", 00:09:45.702 "uuid": "4737b090-d45f-4802-868e-3c55e632948f", 00:09:45.702 "strip_size_kb": 0, 00:09:45.702 "state": "online", 00:09:45.702 "raid_level": "raid1", 00:09:45.702 "superblock": true, 00:09:45.702 "num_base_bdevs": 3, 00:09:45.702 "num_base_bdevs_discovered": 2, 00:09:45.702 "num_base_bdevs_operational": 2, 00:09:45.702 "base_bdevs_list": [ 00:09:45.702 { 00:09:45.702 "name": null, 00:09:45.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.702 "is_configured": false, 00:09:45.702 "data_offset": 2048, 00:09:45.702 "data_size": 63488 00:09:45.702 }, 00:09:45.702 { 00:09:45.702 "name": "pt2", 00:09:45.702 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:45.702 "is_configured": true, 00:09:45.702 "data_offset": 2048, 00:09:45.702 "data_size": 63488 00:09:45.702 }, 00:09:45.702 { 00:09:45.702 "name": "pt3", 00:09:45.702 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:45.702 "is_configured": true, 00:09:45.702 "data_offset": 2048, 00:09:45.702 "data_size": 63488 00:09:45.702 } 00:09:45.702 ] 00:09:45.702 }' 00:09:45.702 13:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.702 13:19:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.273 13:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:46.273 13:19:35 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.273 13:19:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.273 [2024-11-17 13:19:35.311424] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:46.273 [2024-11-17 13:19:35.311476] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:46.273 [2024-11-17 13:19:35.311600] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:46.274 [2024-11-17 13:19:35.311682] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:46.274 [2024-11-17 13:19:35.311694] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:46.274 13:19:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.274 13:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:09:46.274 13:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.274 13:19:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.274 13:19:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.274 13:19:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.274 13:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:09:46.274 13:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:09:46.274 13:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:09:46.274 13:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:09:46.274 13:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:09:46.274 13:19:35 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.274 13:19:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.274 13:19:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.274 13:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:46.274 13:19:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.274 13:19:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.274 [2024-11-17 13:19:35.387303] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:46.274 [2024-11-17 13:19:35.387385] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:46.274 [2024-11-17 13:19:35.387415] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:09:46.274 [2024-11-17 13:19:35.387428] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:46.274 [2024-11-17 13:19:35.390221] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:46.274 [2024-11-17 13:19:35.390266] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:46.274 [2024-11-17 13:19:35.390379] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:46.274 [2024-11-17 13:19:35.390441] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:46.274 [2024-11-17 13:19:35.390596] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:09:46.274 [2024-11-17 13:19:35.390618] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:46.274 [2024-11-17 13:19:35.390640] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:09:46.274 [2024-11-17 13:19:35.390700] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:46.274 pt1 00:09:46.274 13:19:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.274 13:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:09:46.274 13:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:46.274 13:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:46.274 13:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:46.274 13:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:46.274 13:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:46.274 13:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:46.274 13:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.274 13:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.274 13:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.274 13:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.274 13:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.274 13:19:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.274 13:19:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.274 13:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:46.274 13:19:35 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.274 13:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.274 "name": "raid_bdev1", 00:09:46.274 "uuid": "4737b090-d45f-4802-868e-3c55e632948f", 00:09:46.274 "strip_size_kb": 0, 00:09:46.274 "state": "configuring", 00:09:46.274 "raid_level": "raid1", 00:09:46.274 "superblock": true, 00:09:46.274 "num_base_bdevs": 3, 00:09:46.274 "num_base_bdevs_discovered": 1, 00:09:46.274 "num_base_bdevs_operational": 2, 00:09:46.274 "base_bdevs_list": [ 00:09:46.274 { 00:09:46.274 "name": null, 00:09:46.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.274 "is_configured": false, 00:09:46.274 "data_offset": 2048, 00:09:46.274 "data_size": 63488 00:09:46.274 }, 00:09:46.274 { 00:09:46.274 "name": "pt2", 00:09:46.274 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:46.274 "is_configured": true, 00:09:46.274 "data_offset": 2048, 00:09:46.274 "data_size": 63488 00:09:46.274 }, 00:09:46.274 { 00:09:46.274 "name": null, 00:09:46.274 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:46.274 "is_configured": false, 00:09:46.274 "data_offset": 2048, 00:09:46.274 "data_size": 63488 00:09:46.274 } 00:09:46.274 ] 00:09:46.274 }' 00:09:46.274 13:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.274 13:19:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.845 13:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:09:46.845 13:19:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.845 13:19:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.845 13:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:46.845 13:19:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:46.845 13:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:09:46.845 13:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:46.845 13:19:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.845 13:19:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.845 [2024-11-17 13:19:35.862772] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:46.845 [2024-11-17 13:19:35.863128] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:46.845 [2024-11-17 13:19:35.863395] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:09:46.845 [2024-11-17 13:19:35.863553] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:46.845 [2024-11-17 13:19:35.865268] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:46.845 [2024-11-17 13:19:35.865486] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:46.845 [2024-11-17 13:19:35.865902] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:46.845 [2024-11-17 13:19:35.866189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:46.845 [2024-11-17 13:19:35.866871] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:09:46.845 [2024-11-17 13:19:35.867023] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:46.845 [2024-11-17 13:19:35.868075] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:09:46.845 [2024-11-17 13:19:35.868926] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:09:46.845 [2024-11-17 13:19:35.869102] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:09:46.845 pt3 00:09:46.845 [2024-11-17 13:19:35.870091] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:46.845 13:19:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.845 13:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:46.845 13:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:46.845 13:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:46.845 13:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:46.845 13:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:46.845 13:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:46.845 13:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.845 13:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.845 13:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.845 13:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.845 13:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.845 13:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:46.845 13:19:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.845 13:19:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.845 13:19:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:09:46.845 13:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.845 "name": "raid_bdev1", 00:09:46.845 "uuid": "4737b090-d45f-4802-868e-3c55e632948f", 00:09:46.845 "strip_size_kb": 0, 00:09:46.845 "state": "online", 00:09:46.845 "raid_level": "raid1", 00:09:46.845 "superblock": true, 00:09:46.845 "num_base_bdevs": 3, 00:09:46.845 "num_base_bdevs_discovered": 2, 00:09:46.845 "num_base_bdevs_operational": 2, 00:09:46.845 "base_bdevs_list": [ 00:09:46.845 { 00:09:46.845 "name": null, 00:09:46.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.845 "is_configured": false, 00:09:46.845 "data_offset": 2048, 00:09:46.845 "data_size": 63488 00:09:46.845 }, 00:09:46.845 { 00:09:46.845 "name": "pt2", 00:09:46.845 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:46.845 "is_configured": true, 00:09:46.845 "data_offset": 2048, 00:09:46.845 "data_size": 63488 00:09:46.845 }, 00:09:46.845 { 00:09:46.845 "name": "pt3", 00:09:46.845 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:46.845 "is_configured": true, 00:09:46.845 "data_offset": 2048, 00:09:46.845 "data_size": 63488 00:09:46.845 } 00:09:46.845 ] 00:09:46.845 }' 00:09:46.845 13:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.845 13:19:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.105 13:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:09:47.105 13:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:47.105 13:19:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.105 13:19:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.105 13:19:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.365 13:19:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:09:47.365 13:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:47.365 13:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:09:47.365 13:19:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.365 13:19:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.365 [2024-11-17 13:19:36.342418] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:47.365 13:19:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.365 13:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 4737b090-d45f-4802-868e-3c55e632948f '!=' 4737b090-d45f-4802-868e-3c55e632948f ']' 00:09:47.365 13:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68568 00:09:47.365 13:19:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 68568 ']' 00:09:47.365 13:19:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 68568 00:09:47.365 13:19:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:47.365 13:19:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:47.365 13:19:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68568 00:09:47.365 killing process with pid 68568 00:09:47.365 13:19:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:47.365 13:19:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:47.365 13:19:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68568' 00:09:47.365 13:19:36 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@973 -- # kill 68568 00:09:47.365 [2024-11-17 13:19:36.422184] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:47.365 [2024-11-17 13:19:36.422296] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:47.365 [2024-11-17 13:19:36.422362] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:47.365 [2024-11-17 13:19:36.422375] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:09:47.365 13:19:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 68568 00:09:47.625 [2024-11-17 13:19:36.726922] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:49.005 13:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:49.005 00:09:49.005 real 0m7.872s 00:09:49.005 user 0m12.112s 00:09:49.005 sys 0m1.587s 00:09:49.005 ************************************ 00:09:49.005 END TEST raid_superblock_test 00:09:49.005 ************************************ 00:09:49.005 13:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:49.005 13:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.005 13:19:37 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:09:49.005 13:19:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:49.005 13:19:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:49.005 13:19:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:49.005 ************************************ 00:09:49.005 START TEST raid_read_error_test 00:09:49.005 ************************************ 00:09:49.005 13:19:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:09:49.005 13:19:37 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:49.005 13:19:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:49.005 13:19:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:49.005 13:19:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:49.005 13:19:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:49.005 13:19:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:49.005 13:19:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:49.005 13:19:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:49.005 13:19:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:49.005 13:19:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:49.005 13:19:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:49.005 13:19:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:49.005 13:19:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:49.005 13:19:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:49.005 13:19:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:49.005 13:19:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:49.005 13:19:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:49.005 13:19:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:49.005 13:19:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:49.005 13:19:37 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:49.005 13:19:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:49.005 13:19:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:49.005 13:19:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:49.005 13:19:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:49.005 13:19:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Yr9G3rcMi8 00:09:49.005 13:19:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69014 00:09:49.005 13:19:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:49.005 13:19:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69014 00:09:49.005 13:19:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 69014 ']' 00:09:49.005 13:19:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.005 13:19:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:49.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.005 13:19:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.005 13:19:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:49.005 13:19:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.005 [2024-11-17 13:19:38.021374] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:09:49.005 [2024-11-17 13:19:38.021513] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69014 ] 00:09:49.005 [2024-11-17 13:19:38.193426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.264 [2024-11-17 13:19:38.311720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.524 [2024-11-17 13:19:38.513815] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:49.524 [2024-11-17 13:19:38.513849] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:49.785 13:19:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:49.785 13:19:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:49.785 13:19:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:49.785 13:19:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:49.785 13:19:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.785 13:19:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.785 BaseBdev1_malloc 00:09:49.785 13:19:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.785 13:19:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:49.785 13:19:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.785 13:19:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.785 true 00:09:49.785 13:19:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:49.785 13:19:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:49.785 13:19:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.785 13:19:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.785 [2024-11-17 13:19:38.924839] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:49.785 [2024-11-17 13:19:38.924896] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.785 [2024-11-17 13:19:38.924922] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:49.785 [2024-11-17 13:19:38.924937] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.785 [2024-11-17 13:19:38.927339] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.785 [2024-11-17 13:19:38.927385] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:49.785 BaseBdev1 00:09:49.785 13:19:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.785 13:19:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:49.785 13:19:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:49.785 13:19:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.785 13:19:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.785 BaseBdev2_malloc 00:09:49.785 13:19:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.785 13:19:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:49.785 13:19:38 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.785 13:19:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.785 true 00:09:49.785 13:19:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.785 13:19:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:49.785 13:19:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.785 13:19:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.785 [2024-11-17 13:19:38.979706] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:49.785 [2024-11-17 13:19:38.979764] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.785 [2024-11-17 13:19:38.979786] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:49.785 [2024-11-17 13:19:38.979800] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.785 [2024-11-17 13:19:38.981967] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.785 [2024-11-17 13:19:38.982054] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:49.785 BaseBdev2 00:09:49.785 13:19:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.785 13:19:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:49.785 13:19:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:49.785 13:19:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.785 13:19:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.045 BaseBdev3_malloc 00:09:50.046 13:19:39 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.046 13:19:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:50.046 13:19:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.046 13:19:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.046 true 00:09:50.046 13:19:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.046 13:19:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:50.046 13:19:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.046 13:19:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.046 [2024-11-17 13:19:39.045177] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:50.046 [2024-11-17 13:19:39.045302] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:50.046 [2024-11-17 13:19:39.045335] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:50.046 [2024-11-17 13:19:39.045350] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:50.046 [2024-11-17 13:19:39.047644] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:50.046 [2024-11-17 13:19:39.047689] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:50.046 BaseBdev3 00:09:50.046 13:19:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.046 13:19:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:50.046 13:19:39 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.046 13:19:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.046 [2024-11-17 13:19:39.053261] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:50.046 [2024-11-17 13:19:39.055101] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:50.046 [2024-11-17 13:19:39.055181] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:50.046 [2024-11-17 13:19:39.055440] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:50.046 [2024-11-17 13:19:39.055455] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:50.046 [2024-11-17 13:19:39.055737] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:50.046 [2024-11-17 13:19:39.055917] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:50.046 [2024-11-17 13:19:39.055931] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:50.046 [2024-11-17 13:19:39.056093] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:50.046 13:19:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.046 13:19:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:50.046 13:19:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:50.046 13:19:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:50.046 13:19:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:50.046 13:19:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:50.046 13:19:39 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:50.046 13:19:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.046 13:19:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.046 13:19:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.046 13:19:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.046 13:19:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.046 13:19:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:50.046 13:19:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.046 13:19:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.046 13:19:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.046 13:19:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.046 "name": "raid_bdev1", 00:09:50.046 "uuid": "62b34089-dcbe-4c27-910c-df44c3e75efe", 00:09:50.046 "strip_size_kb": 0, 00:09:50.046 "state": "online", 00:09:50.046 "raid_level": "raid1", 00:09:50.046 "superblock": true, 00:09:50.046 "num_base_bdevs": 3, 00:09:50.046 "num_base_bdevs_discovered": 3, 00:09:50.046 "num_base_bdevs_operational": 3, 00:09:50.046 "base_bdevs_list": [ 00:09:50.046 { 00:09:50.046 "name": "BaseBdev1", 00:09:50.046 "uuid": "2649ab70-6122-5aec-aa6d-4da387d22b00", 00:09:50.046 "is_configured": true, 00:09:50.046 "data_offset": 2048, 00:09:50.046 "data_size": 63488 00:09:50.046 }, 00:09:50.046 { 00:09:50.046 "name": "BaseBdev2", 00:09:50.046 "uuid": "783948b7-c81a-5857-acb7-49e71b3d8e42", 00:09:50.046 "is_configured": true, 00:09:50.046 "data_offset": 2048, 00:09:50.046 "data_size": 63488 
00:09:50.046 }, 00:09:50.046 { 00:09:50.046 "name": "BaseBdev3", 00:09:50.046 "uuid": "a9aafbac-de2d-5214-bc71-680622f51d30", 00:09:50.046 "is_configured": true, 00:09:50.046 "data_offset": 2048, 00:09:50.046 "data_size": 63488 00:09:50.046 } 00:09:50.046 ] 00:09:50.046 }' 00:09:50.046 13:19:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.046 13:19:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.305 13:19:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:50.306 13:19:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:50.565 [2024-11-17 13:19:39.617725] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:51.500 13:19:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:51.500 13:19:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.500 13:19:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.500 13:19:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.500 13:19:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:51.500 13:19:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:51.500 13:19:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:09:51.500 13:19:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:51.500 13:19:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:51.500 13:19:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:51.500 
13:19:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:51.500 13:19:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:51.500 13:19:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:51.500 13:19:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:51.500 13:19:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.500 13:19:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.500 13:19:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.500 13:19:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.500 13:19:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.500 13:19:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:51.500 13:19:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.500 13:19:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.500 13:19:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.500 13:19:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.500 "name": "raid_bdev1", 00:09:51.500 "uuid": "62b34089-dcbe-4c27-910c-df44c3e75efe", 00:09:51.500 "strip_size_kb": 0, 00:09:51.500 "state": "online", 00:09:51.500 "raid_level": "raid1", 00:09:51.500 "superblock": true, 00:09:51.500 "num_base_bdevs": 3, 00:09:51.500 "num_base_bdevs_discovered": 3, 00:09:51.500 "num_base_bdevs_operational": 3, 00:09:51.500 "base_bdevs_list": [ 00:09:51.500 { 00:09:51.500 "name": "BaseBdev1", 00:09:51.500 "uuid": "2649ab70-6122-5aec-aa6d-4da387d22b00", 
00:09:51.500 "is_configured": true, 00:09:51.500 "data_offset": 2048, 00:09:51.500 "data_size": 63488 00:09:51.500 }, 00:09:51.500 { 00:09:51.500 "name": "BaseBdev2", 00:09:51.500 "uuid": "783948b7-c81a-5857-acb7-49e71b3d8e42", 00:09:51.500 "is_configured": true, 00:09:51.500 "data_offset": 2048, 00:09:51.500 "data_size": 63488 00:09:51.500 }, 00:09:51.500 { 00:09:51.500 "name": "BaseBdev3", 00:09:51.501 "uuid": "a9aafbac-de2d-5214-bc71-680622f51d30", 00:09:51.501 "is_configured": true, 00:09:51.501 "data_offset": 2048, 00:09:51.501 "data_size": 63488 00:09:51.501 } 00:09:51.501 ] 00:09:51.501 }' 00:09:51.501 13:19:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.501 13:19:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.068 13:19:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:52.068 13:19:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.068 13:19:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.068 [2024-11-17 13:19:40.998989] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:52.068 [2024-11-17 13:19:40.999024] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:52.068 [2024-11-17 13:19:41.001612] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:52.068 [2024-11-17 13:19:41.001666] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:52.068 [2024-11-17 13:19:41.001768] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:52.068 [2024-11-17 13:19:41.001778] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:52.068 { 00:09:52.068 "results": [ 00:09:52.068 { 00:09:52.068 "job": "raid_bdev1", 
00:09:52.068 "core_mask": "0x1", 00:09:52.068 "workload": "randrw", 00:09:52.068 "percentage": 50, 00:09:52.068 "status": "finished", 00:09:52.068 "queue_depth": 1, 00:09:52.068 "io_size": 131072, 00:09:52.068 "runtime": 1.382038, 00:09:52.068 "iops": 12971.42336173101, 00:09:52.068 "mibps": 1621.4279202163762, 00:09:52.068 "io_failed": 0, 00:09:52.068 "io_timeout": 0, 00:09:52.068 "avg_latency_us": 74.41700072808624, 00:09:52.068 "min_latency_us": 23.252401746724892, 00:09:52.068 "max_latency_us": 1631.2454148471616 00:09:52.068 } 00:09:52.068 ], 00:09:52.068 "core_count": 1 00:09:52.068 } 00:09:52.068 13:19:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.068 13:19:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69014 00:09:52.068 13:19:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 69014 ']' 00:09:52.068 13:19:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 69014 00:09:52.068 13:19:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:52.068 13:19:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:52.068 13:19:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69014 00:09:52.068 13:19:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:52.068 13:19:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:52.068 13:19:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69014' 00:09:52.068 killing process with pid 69014 00:09:52.068 13:19:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 69014 00:09:52.068 [2024-11-17 13:19:41.045870] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:52.068 13:19:41 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 69014 00:09:52.068 [2024-11-17 13:19:41.274868] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:53.446 13:19:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:53.446 13:19:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:53.446 13:19:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Yr9G3rcMi8 00:09:53.446 13:19:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:53.446 13:19:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:53.446 13:19:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:53.446 13:19:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:53.446 13:19:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:53.446 00:09:53.446 real 0m4.508s 00:09:53.446 user 0m5.381s 00:09:53.446 sys 0m0.594s 00:09:53.446 13:19:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:53.446 ************************************ 00:09:53.446 END TEST raid_read_error_test 00:09:53.446 ************************************ 00:09:53.446 13:19:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.446 13:19:42 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:09:53.446 13:19:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:53.446 13:19:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:53.446 13:19:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:53.446 ************************************ 00:09:53.446 START TEST raid_write_error_test 00:09:53.446 ************************************ 00:09:53.446 13:19:42 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:09:53.446 13:19:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:53.446 13:19:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:53.446 13:19:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:53.446 13:19:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:53.446 13:19:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:53.446 13:19:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:53.446 13:19:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:53.446 13:19:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:53.446 13:19:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:53.446 13:19:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:53.446 13:19:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:53.446 13:19:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:53.446 13:19:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:53.446 13:19:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:53.446 13:19:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:53.446 13:19:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:53.446 13:19:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:53.446 13:19:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:09:53.446 13:19:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:53.446 13:19:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:53.446 13:19:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:53.446 13:19:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:53.446 13:19:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:53.446 13:19:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:53.446 13:19:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.9ZAmsU4Pi8 00:09:53.446 13:19:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69154 00:09:53.446 13:19:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69154 00:09:53.446 13:19:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:53.446 13:19:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 69154 ']' 00:09:53.446 13:19:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:53.446 13:19:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:53.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:53.446 13:19:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:53.446 13:19:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:53.446 13:19:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.446 [2024-11-17 13:19:42.625452] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:09:53.446 [2024-11-17 13:19:42.625607] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69154 ] 00:09:53.704 [2024-11-17 13:19:42.827279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.962 [2024-11-17 13:19:42.946875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.962 [2024-11-17 13:19:43.160675] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:53.962 [2024-11-17 13:19:43.160819] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:54.530 13:19:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:54.530 13:19:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:54.530 13:19:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:54.530 13:19:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:54.530 13:19:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.530 13:19:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.530 BaseBdev1_malloc 00:09:54.530 13:19:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.530 13:19:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:54.530 13:19:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.530 13:19:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.530 true 00:09:54.530 13:19:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.530 13:19:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:54.530 13:19:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.530 13:19:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.530 [2024-11-17 13:19:43.524149] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:54.530 [2024-11-17 13:19:43.524215] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:54.530 [2024-11-17 13:19:43.524237] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:54.530 [2024-11-17 13:19:43.524249] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:54.530 [2024-11-17 13:19:43.526300] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:54.530 [2024-11-17 13:19:43.526340] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:54.530 BaseBdev1 00:09:54.530 13:19:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.530 13:19:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:54.530 13:19:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:54.530 13:19:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.530 13:19:43 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:54.530 BaseBdev2_malloc 00:09:54.530 13:19:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.530 13:19:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:54.530 13:19:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.530 13:19:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.530 true 00:09:54.530 13:19:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.530 13:19:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:54.530 13:19:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.530 13:19:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.530 [2024-11-17 13:19:43.590305] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:54.530 [2024-11-17 13:19:43.590404] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:54.530 [2024-11-17 13:19:43.590425] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:54.530 [2024-11-17 13:19:43.590436] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:54.530 [2024-11-17 13:19:43.592454] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:54.530 [2024-11-17 13:19:43.592495] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:54.530 BaseBdev2 00:09:54.530 13:19:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.530 13:19:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:54.530 13:19:43 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:54.530 13:19:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.530 13:19:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.530 BaseBdev3_malloc 00:09:54.530 13:19:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.530 13:19:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:54.530 13:19:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.530 13:19:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.530 true 00:09:54.530 13:19:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.530 13:19:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:54.530 13:19:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.530 13:19:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.530 [2024-11-17 13:19:43.669689] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:54.530 [2024-11-17 13:19:43.669749] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:54.530 [2024-11-17 13:19:43.669769] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:54.531 [2024-11-17 13:19:43.669782] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:54.531 [2024-11-17 13:19:43.671936] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:54.531 [2024-11-17 13:19:43.671976] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:54.531 BaseBdev3 00:09:54.531 13:19:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.531 13:19:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:54.531 13:19:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.531 13:19:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.531 [2024-11-17 13:19:43.681748] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:54.531 [2024-11-17 13:19:43.683630] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:54.531 [2024-11-17 13:19:43.683705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:54.531 [2024-11-17 13:19:43.683918] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:54.531 [2024-11-17 13:19:43.683930] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:54.531 [2024-11-17 13:19:43.684179] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:54.531 [2024-11-17 13:19:43.684346] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:54.531 [2024-11-17 13:19:43.684360] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:54.531 [2024-11-17 13:19:43.684502] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:54.531 13:19:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.531 13:19:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:54.531 13:19:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:09:54.531 13:19:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:54.531 13:19:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:54.531 13:19:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:54.531 13:19:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:54.531 13:19:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.531 13:19:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.531 13:19:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.531 13:19:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.531 13:19:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.531 13:19:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:54.531 13:19:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.531 13:19:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.531 13:19:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.531 13:19:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.531 "name": "raid_bdev1", 00:09:54.531 "uuid": "cd1e6421-28ed-42e4-ba27-6958b8ee0d78", 00:09:54.531 "strip_size_kb": 0, 00:09:54.531 "state": "online", 00:09:54.531 "raid_level": "raid1", 00:09:54.531 "superblock": true, 00:09:54.531 "num_base_bdevs": 3, 00:09:54.531 "num_base_bdevs_discovered": 3, 00:09:54.531 "num_base_bdevs_operational": 3, 00:09:54.531 "base_bdevs_list": [ 00:09:54.531 { 00:09:54.531 "name": "BaseBdev1", 00:09:54.531 
"uuid": "1d228fe9-b180-5e33-a368-43100c21ce8f", 00:09:54.531 "is_configured": true, 00:09:54.531 "data_offset": 2048, 00:09:54.531 "data_size": 63488 00:09:54.531 }, 00:09:54.531 { 00:09:54.531 "name": "BaseBdev2", 00:09:54.531 "uuid": "6115f50e-0e2b-5e60-9fc5-491f49cebf07", 00:09:54.531 "is_configured": true, 00:09:54.531 "data_offset": 2048, 00:09:54.531 "data_size": 63488 00:09:54.531 }, 00:09:54.531 { 00:09:54.531 "name": "BaseBdev3", 00:09:54.531 "uuid": "09f648b5-c3c6-57b0-9168-c5bc710d1cf1", 00:09:54.531 "is_configured": true, 00:09:54.531 "data_offset": 2048, 00:09:54.531 "data_size": 63488 00:09:54.531 } 00:09:54.531 ] 00:09:54.531 }' 00:09:54.531 13:19:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.531 13:19:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.098 13:19:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:55.098 13:19:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:55.098 [2024-11-17 13:19:44.198370] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:56.034 13:19:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:56.034 13:19:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.034 13:19:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.034 [2024-11-17 13:19:45.121627] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:09:56.034 [2024-11-17 13:19:45.121761] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:56.034 [2024-11-17 13:19:45.122027] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
00:09:56.034 13:19:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.034 13:19:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:56.034 13:19:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:56.034 13:19:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:09:56.034 13:19:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:09:56.034 13:19:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:56.034 13:19:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:56.034 13:19:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:56.034 13:19:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:56.034 13:19:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:56.034 13:19:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:56.034 13:19:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.034 13:19:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.034 13:19:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.034 13:19:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.034 13:19:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.034 13:19:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:56.034 13:19:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:56.034 13:19:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.034 13:19:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.034 13:19:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.034 "name": "raid_bdev1", 00:09:56.035 "uuid": "cd1e6421-28ed-42e4-ba27-6958b8ee0d78", 00:09:56.035 "strip_size_kb": 0, 00:09:56.035 "state": "online", 00:09:56.035 "raid_level": "raid1", 00:09:56.035 "superblock": true, 00:09:56.035 "num_base_bdevs": 3, 00:09:56.035 "num_base_bdevs_discovered": 2, 00:09:56.035 "num_base_bdevs_operational": 2, 00:09:56.035 "base_bdevs_list": [ 00:09:56.035 { 00:09:56.035 "name": null, 00:09:56.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.035 "is_configured": false, 00:09:56.035 "data_offset": 0, 00:09:56.035 "data_size": 63488 00:09:56.035 }, 00:09:56.035 { 00:09:56.035 "name": "BaseBdev2", 00:09:56.035 "uuid": "6115f50e-0e2b-5e60-9fc5-491f49cebf07", 00:09:56.035 "is_configured": true, 00:09:56.035 "data_offset": 2048, 00:09:56.035 "data_size": 63488 00:09:56.035 }, 00:09:56.035 { 00:09:56.035 "name": "BaseBdev3", 00:09:56.035 "uuid": "09f648b5-c3c6-57b0-9168-c5bc710d1cf1", 00:09:56.035 "is_configured": true, 00:09:56.035 "data_offset": 2048, 00:09:56.035 "data_size": 63488 00:09:56.035 } 00:09:56.035 ] 00:09:56.035 }' 00:09:56.035 13:19:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.035 13:19:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.601 13:19:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:56.601 13:19:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.601 13:19:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.601 [2024-11-17 13:19:45.568128] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:56.601 [2024-11-17 13:19:45.568262] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:56.602 [2024-11-17 13:19:45.570941] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:56.602 [2024-11-17 13:19:45.571059] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:56.602 [2024-11-17 13:19:45.571184] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:56.602 [2024-11-17 13:19:45.571258] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:56.602 { 00:09:56.602 "results": [ 00:09:56.602 { 00:09:56.602 "job": "raid_bdev1", 00:09:56.602 "core_mask": "0x1", 00:09:56.602 "workload": "randrw", 00:09:56.602 "percentage": 50, 00:09:56.602 "status": "finished", 00:09:56.602 "queue_depth": 1, 00:09:56.602 "io_size": 131072, 00:09:56.602 "runtime": 1.370578, 00:09:56.602 "iops": 14972.51524539282, 00:09:56.602 "mibps": 1871.5644056741025, 00:09:56.602 "io_failed": 0, 00:09:56.602 "io_timeout": 0, 00:09:56.602 "avg_latency_us": 64.23386587262084, 00:09:56.602 "min_latency_us": 23.699563318777294, 00:09:56.602 "max_latency_us": 1423.7624454148472 00:09:56.602 } 00:09:56.602 ], 00:09:56.602 "core_count": 1 00:09:56.602 } 00:09:56.602 13:19:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.602 13:19:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69154 00:09:56.602 13:19:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 69154 ']' 00:09:56.602 13:19:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 69154 00:09:56.602 13:19:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:56.602 13:19:45 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:56.602 13:19:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69154 00:09:56.602 killing process with pid 69154 00:09:56.602 13:19:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:56.602 13:19:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:56.602 13:19:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69154' 00:09:56.602 13:19:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 69154 00:09:56.602 [2024-11-17 13:19:45.609646] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:56.602 13:19:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 69154 00:09:56.860 [2024-11-17 13:19:45.841094] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:57.818 13:19:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.9ZAmsU4Pi8 00:09:57.818 13:19:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:57.818 13:19:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:57.818 ************************************ 00:09:57.818 END TEST raid_write_error_test 00:09:57.818 ************************************ 00:09:57.818 13:19:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:57.818 13:19:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:57.818 13:19:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:57.818 13:19:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:57.818 13:19:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 
]] 00:09:57.818 00:09:57.818 real 0m4.520s 00:09:57.818 user 0m5.329s 00:09:57.818 sys 0m0.592s 00:09:57.818 13:19:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:57.818 13:19:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.077 13:19:47 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:09:58.077 13:19:47 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:58.077 13:19:47 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:09:58.077 13:19:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:58.077 13:19:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:58.077 13:19:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:58.077 ************************************ 00:09:58.077 START TEST raid_state_function_test 00:09:58.077 ************************************ 00:09:58.077 13:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:09:58.077 13:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:58.077 13:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:58.077 13:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:58.077 13:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:58.077 13:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:58.077 13:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:58.077 13:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:58.077 13:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:58.077 
13:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:58.077 13:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:58.077 13:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:58.077 13:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:58.077 13:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:58.077 13:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:58.077 13:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:58.077 13:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:58.077 13:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:58.077 13:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:58.077 13:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:58.077 13:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:58.077 13:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:58.077 13:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:58.077 13:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:58.077 13:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:58.077 13:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:58.077 13:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:58.077 13:19:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:58.077 13:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:58.077 13:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:58.077 Process raid pid: 69298 00:09:58.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:58.077 13:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69298 00:09:58.077 13:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69298' 00:09:58.077 13:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69298 00:09:58.077 13:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69298 ']' 00:09:58.077 13:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:58.077 13:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:58.077 13:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:58.077 13:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:58.077 13:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.077 13:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:58.077 [2024-11-17 13:19:47.178198] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:09:58.077 [2024-11-17 13:19:47.178340] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:58.337 [2024-11-17 13:19:47.356992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.337 [2024-11-17 13:19:47.470653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.594 [2024-11-17 13:19:47.668743] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:58.594 [2024-11-17 13:19:47.668780] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:58.852 13:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:58.852 13:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:58.852 13:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:58.852 13:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.852 13:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.852 [2024-11-17 13:19:48.015330] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:58.852 [2024-11-17 13:19:48.015425] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:58.852 [2024-11-17 13:19:48.015440] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:58.852 [2024-11-17 13:19:48.015450] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:58.852 [2024-11-17 13:19:48.015456] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:58.852 [2024-11-17 13:19:48.015465] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:58.852 [2024-11-17 13:19:48.015471] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:58.852 [2024-11-17 13:19:48.015479] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:58.852 13:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.852 13:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:58.852 13:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.853 13:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.853 13:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:58.853 13:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:58.853 13:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:58.853 13:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.853 13:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.853 13:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.853 13:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.853 13:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.853 13:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.853 13:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:58.853 13:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.853 13:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.853 13:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.853 "name": "Existed_Raid", 00:09:58.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.853 "strip_size_kb": 64, 00:09:58.853 "state": "configuring", 00:09:58.853 "raid_level": "raid0", 00:09:58.853 "superblock": false, 00:09:58.853 "num_base_bdevs": 4, 00:09:58.853 "num_base_bdevs_discovered": 0, 00:09:58.853 "num_base_bdevs_operational": 4, 00:09:58.853 "base_bdevs_list": [ 00:09:58.853 { 00:09:58.853 "name": "BaseBdev1", 00:09:58.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.853 "is_configured": false, 00:09:58.853 "data_offset": 0, 00:09:58.853 "data_size": 0 00:09:58.853 }, 00:09:58.853 { 00:09:58.853 "name": "BaseBdev2", 00:09:58.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.853 "is_configured": false, 00:09:58.853 "data_offset": 0, 00:09:58.853 "data_size": 0 00:09:58.853 }, 00:09:58.853 { 00:09:58.853 "name": "BaseBdev3", 00:09:58.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.853 "is_configured": false, 00:09:58.853 "data_offset": 0, 00:09:58.853 "data_size": 0 00:09:58.853 }, 00:09:58.853 { 00:09:58.853 "name": "BaseBdev4", 00:09:58.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.853 "is_configured": false, 00:09:58.853 "data_offset": 0, 00:09:58.853 "data_size": 0 00:09:58.853 } 00:09:58.853 ] 00:09:58.853 }' 00:09:58.853 13:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.853 13:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.422 13:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:09:59.422 13:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.422 13:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.422 [2024-11-17 13:19:48.450524] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:59.422 [2024-11-17 13:19:48.450604] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:59.422 13:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.422 13:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:59.422 13:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.422 13:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.422 [2024-11-17 13:19:48.458488] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:59.422 [2024-11-17 13:19:48.458528] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:59.422 [2024-11-17 13:19:48.458538] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:59.422 [2024-11-17 13:19:48.458547] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:59.422 [2024-11-17 13:19:48.458553] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:59.422 [2024-11-17 13:19:48.458562] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:59.422 [2024-11-17 13:19:48.458567] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:59.422 [2024-11-17 13:19:48.458575] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:59.422 13:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.422 13:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:59.422 13:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.422 13:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.422 BaseBdev1 00:09:59.422 [2024-11-17 13:19:48.501869] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:59.422 13:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.422 13:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:59.422 13:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:59.422 13:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:59.422 13:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:59.422 13:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:59.422 13:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:59.422 13:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:59.422 13:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.422 13:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.423 13:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.423 13:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:59.423 13:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.423 13:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.423 [ 00:09:59.423 { 00:09:59.423 "name": "BaseBdev1", 00:09:59.423 "aliases": [ 00:09:59.423 "70ba3e89-358c-4333-954f-d01b7a2b83c2" 00:09:59.423 ], 00:09:59.423 "product_name": "Malloc disk", 00:09:59.423 "block_size": 512, 00:09:59.423 "num_blocks": 65536, 00:09:59.423 "uuid": "70ba3e89-358c-4333-954f-d01b7a2b83c2", 00:09:59.423 "assigned_rate_limits": { 00:09:59.423 "rw_ios_per_sec": 0, 00:09:59.423 "rw_mbytes_per_sec": 0, 00:09:59.423 "r_mbytes_per_sec": 0, 00:09:59.423 "w_mbytes_per_sec": 0 00:09:59.423 }, 00:09:59.423 "claimed": true, 00:09:59.423 "claim_type": "exclusive_write", 00:09:59.423 "zoned": false, 00:09:59.423 "supported_io_types": { 00:09:59.423 "read": true, 00:09:59.423 "write": true, 00:09:59.423 "unmap": true, 00:09:59.423 "flush": true, 00:09:59.423 "reset": true, 00:09:59.423 "nvme_admin": false, 00:09:59.423 "nvme_io": false, 00:09:59.423 "nvme_io_md": false, 00:09:59.423 "write_zeroes": true, 00:09:59.423 "zcopy": true, 00:09:59.423 "get_zone_info": false, 00:09:59.423 "zone_management": false, 00:09:59.423 "zone_append": false, 00:09:59.423 "compare": false, 00:09:59.423 "compare_and_write": false, 00:09:59.423 "abort": true, 00:09:59.423 "seek_hole": false, 00:09:59.423 "seek_data": false, 00:09:59.423 "copy": true, 00:09:59.423 "nvme_iov_md": false 00:09:59.423 }, 00:09:59.423 "memory_domains": [ 00:09:59.423 { 00:09:59.423 "dma_device_id": "system", 00:09:59.423 "dma_device_type": 1 00:09:59.423 }, 00:09:59.423 { 00:09:59.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.423 "dma_device_type": 2 00:09:59.423 } 00:09:59.423 ], 00:09:59.423 "driver_specific": {} 00:09:59.423 } 00:09:59.423 ] 00:09:59.423 13:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:09:59.423 13:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:59.423 13:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:59.423 13:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.423 13:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.423 13:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:59.423 13:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:59.423 13:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:59.423 13:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.423 13:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.423 13:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.423 13:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.423 13:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.423 13:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.423 13:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.423 13:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.423 13:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.423 13:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.423 "name": "Existed_Raid", 
00:09:59.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.423 "strip_size_kb": 64, 00:09:59.423 "state": "configuring", 00:09:59.423 "raid_level": "raid0", 00:09:59.423 "superblock": false, 00:09:59.423 "num_base_bdevs": 4, 00:09:59.423 "num_base_bdevs_discovered": 1, 00:09:59.423 "num_base_bdevs_operational": 4, 00:09:59.423 "base_bdevs_list": [ 00:09:59.423 { 00:09:59.423 "name": "BaseBdev1", 00:09:59.423 "uuid": "70ba3e89-358c-4333-954f-d01b7a2b83c2", 00:09:59.423 "is_configured": true, 00:09:59.423 "data_offset": 0, 00:09:59.423 "data_size": 65536 00:09:59.423 }, 00:09:59.423 { 00:09:59.423 "name": "BaseBdev2", 00:09:59.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.423 "is_configured": false, 00:09:59.423 "data_offset": 0, 00:09:59.423 "data_size": 0 00:09:59.423 }, 00:09:59.423 { 00:09:59.423 "name": "BaseBdev3", 00:09:59.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.423 "is_configured": false, 00:09:59.423 "data_offset": 0, 00:09:59.423 "data_size": 0 00:09:59.423 }, 00:09:59.423 { 00:09:59.423 "name": "BaseBdev4", 00:09:59.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.423 "is_configured": false, 00:09:59.423 "data_offset": 0, 00:09:59.423 "data_size": 0 00:09:59.423 } 00:09:59.423 ] 00:09:59.423 }' 00:09:59.423 13:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.423 13:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.992 13:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:59.992 13:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.992 13:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.992 [2024-11-17 13:19:48.957133] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:59.992 [2024-11-17 13:19:48.957254] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:59.992 13:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.992 13:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:59.992 13:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.992 13:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.992 [2024-11-17 13:19:48.965169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:59.992 [2024-11-17 13:19:48.967196] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:59.992 [2024-11-17 13:19:48.967307] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:59.992 [2024-11-17 13:19:48.967339] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:59.992 [2024-11-17 13:19:48.967366] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:59.992 [2024-11-17 13:19:48.967385] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:59.992 [2024-11-17 13:19:48.967406] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:59.992 13:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.992 13:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:59.992 13:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:59.992 13:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:09:59.992 13:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.992 13:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.992 13:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:59.992 13:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:59.992 13:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:59.992 13:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.992 13:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.992 13:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.992 13:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.992 13:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.992 13:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.992 13:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.992 13:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.992 13:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.992 13:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.992 "name": "Existed_Raid", 00:09:59.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.992 "strip_size_kb": 64, 00:09:59.992 "state": "configuring", 00:09:59.992 "raid_level": "raid0", 00:09:59.992 "superblock": false, 00:09:59.992 "num_base_bdevs": 4, 00:09:59.992 
"num_base_bdevs_discovered": 1, 00:09:59.992 "num_base_bdevs_operational": 4, 00:09:59.992 "base_bdevs_list": [ 00:09:59.992 { 00:09:59.992 "name": "BaseBdev1", 00:09:59.992 "uuid": "70ba3e89-358c-4333-954f-d01b7a2b83c2", 00:09:59.992 "is_configured": true, 00:09:59.992 "data_offset": 0, 00:09:59.992 "data_size": 65536 00:09:59.992 }, 00:09:59.992 { 00:09:59.992 "name": "BaseBdev2", 00:09:59.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.992 "is_configured": false, 00:09:59.992 "data_offset": 0, 00:09:59.992 "data_size": 0 00:09:59.992 }, 00:09:59.992 { 00:09:59.992 "name": "BaseBdev3", 00:09:59.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.992 "is_configured": false, 00:09:59.992 "data_offset": 0, 00:09:59.992 "data_size": 0 00:09:59.992 }, 00:09:59.992 { 00:09:59.992 "name": "BaseBdev4", 00:09:59.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.992 "is_configured": false, 00:09:59.992 "data_offset": 0, 00:09:59.992 "data_size": 0 00:09:59.992 } 00:09:59.992 ] 00:09:59.992 }' 00:09:59.992 13:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.992 13:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.252 13:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:00.252 13:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.252 13:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.252 [2024-11-17 13:19:49.442133] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:00.252 BaseBdev2 00:10:00.252 13:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.252 13:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:00.252 13:19:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:00.252 13:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:00.252 13:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:00.252 13:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:00.252 13:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:00.252 13:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:00.252 13:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.252 13:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.252 13:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.252 13:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:00.252 13:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.252 13:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.252 [ 00:10:00.252 { 00:10:00.252 "name": "BaseBdev2", 00:10:00.252 "aliases": [ 00:10:00.252 "8e0294a5-a1e4-4241-99e1-5c58a517e91a" 00:10:00.252 ], 00:10:00.252 "product_name": "Malloc disk", 00:10:00.252 "block_size": 512, 00:10:00.252 "num_blocks": 65536, 00:10:00.252 "uuid": "8e0294a5-a1e4-4241-99e1-5c58a517e91a", 00:10:00.252 "assigned_rate_limits": { 00:10:00.252 "rw_ios_per_sec": 0, 00:10:00.252 "rw_mbytes_per_sec": 0, 00:10:00.252 "r_mbytes_per_sec": 0, 00:10:00.252 "w_mbytes_per_sec": 0 00:10:00.252 }, 00:10:00.252 "claimed": true, 00:10:00.252 "claim_type": "exclusive_write", 00:10:00.252 "zoned": false, 00:10:00.252 "supported_io_types": { 
00:10:00.252 "read": true, 00:10:00.252 "write": true, 00:10:00.252 "unmap": true, 00:10:00.252 "flush": true, 00:10:00.252 "reset": true, 00:10:00.252 "nvme_admin": false, 00:10:00.252 "nvme_io": false, 00:10:00.252 "nvme_io_md": false, 00:10:00.252 "write_zeroes": true, 00:10:00.252 "zcopy": true, 00:10:00.252 "get_zone_info": false, 00:10:00.252 "zone_management": false, 00:10:00.252 "zone_append": false, 00:10:00.252 "compare": false, 00:10:00.252 "compare_and_write": false, 00:10:00.252 "abort": true, 00:10:00.252 "seek_hole": false, 00:10:00.252 "seek_data": false, 00:10:00.252 "copy": true, 00:10:00.252 "nvme_iov_md": false 00:10:00.252 }, 00:10:00.252 "memory_domains": [ 00:10:00.252 { 00:10:00.252 "dma_device_id": "system", 00:10:00.252 "dma_device_type": 1 00:10:00.252 }, 00:10:00.252 { 00:10:00.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.252 "dma_device_type": 2 00:10:00.252 } 00:10:00.252 ], 00:10:00.252 "driver_specific": {} 00:10:00.252 } 00:10:00.252 ] 00:10:00.252 13:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.252 13:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:00.252 13:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:00.252 13:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:00.252 13:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:00.252 13:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.252 13:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.252 13:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:00.252 13:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:10:00.252 13:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:00.252 13:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.252 13:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.252 13:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.252 13:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.252 13:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.252 13:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.252 13:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.252 13:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.511 13:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.511 13:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.511 "name": "Existed_Raid", 00:10:00.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.511 "strip_size_kb": 64, 00:10:00.511 "state": "configuring", 00:10:00.511 "raid_level": "raid0", 00:10:00.511 "superblock": false, 00:10:00.511 "num_base_bdevs": 4, 00:10:00.511 "num_base_bdevs_discovered": 2, 00:10:00.511 "num_base_bdevs_operational": 4, 00:10:00.511 "base_bdevs_list": [ 00:10:00.511 { 00:10:00.511 "name": "BaseBdev1", 00:10:00.511 "uuid": "70ba3e89-358c-4333-954f-d01b7a2b83c2", 00:10:00.511 "is_configured": true, 00:10:00.511 "data_offset": 0, 00:10:00.511 "data_size": 65536 00:10:00.511 }, 00:10:00.511 { 00:10:00.511 "name": "BaseBdev2", 00:10:00.511 "uuid": "8e0294a5-a1e4-4241-99e1-5c58a517e91a", 00:10:00.511 
"is_configured": true, 00:10:00.511 "data_offset": 0, 00:10:00.511 "data_size": 65536 00:10:00.511 }, 00:10:00.511 { 00:10:00.511 "name": "BaseBdev3", 00:10:00.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.511 "is_configured": false, 00:10:00.511 "data_offset": 0, 00:10:00.511 "data_size": 0 00:10:00.511 }, 00:10:00.511 { 00:10:00.511 "name": "BaseBdev4", 00:10:00.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.511 "is_configured": false, 00:10:00.511 "data_offset": 0, 00:10:00.511 "data_size": 0 00:10:00.511 } 00:10:00.511 ] 00:10:00.511 }' 00:10:00.511 13:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.511 13:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.770 13:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:00.770 13:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.770 13:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.770 [2024-11-17 13:19:49.955967] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:00.770 BaseBdev3 00:10:00.770 13:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.770 13:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:00.770 13:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:00.770 13:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:00.770 13:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:00.770 13:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:00.770 13:19:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:00.770 13:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:00.770 13:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.770 13:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.770 13:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.770 13:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:00.770 13:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.770 13:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.770 [ 00:10:00.770 { 00:10:00.770 "name": "BaseBdev3", 00:10:00.770 "aliases": [ 00:10:00.770 "e8532ff9-2403-470e-89ed-658d430b8b8a" 00:10:00.770 ], 00:10:00.770 "product_name": "Malloc disk", 00:10:00.770 "block_size": 512, 00:10:00.770 "num_blocks": 65536, 00:10:00.770 "uuid": "e8532ff9-2403-470e-89ed-658d430b8b8a", 00:10:00.770 "assigned_rate_limits": { 00:10:00.770 "rw_ios_per_sec": 0, 00:10:00.770 "rw_mbytes_per_sec": 0, 00:10:00.770 "r_mbytes_per_sec": 0, 00:10:00.770 "w_mbytes_per_sec": 0 00:10:00.770 }, 00:10:00.770 "claimed": true, 00:10:00.770 "claim_type": "exclusive_write", 00:10:00.770 "zoned": false, 00:10:00.770 "supported_io_types": { 00:10:00.770 "read": true, 00:10:00.770 "write": true, 00:10:00.770 "unmap": true, 00:10:00.770 "flush": true, 00:10:00.770 "reset": true, 00:10:00.770 "nvme_admin": false, 00:10:00.770 "nvme_io": false, 00:10:00.770 "nvme_io_md": false, 00:10:00.770 "write_zeroes": true, 00:10:00.770 "zcopy": true, 00:10:00.770 "get_zone_info": false, 00:10:00.770 "zone_management": false, 00:10:00.770 "zone_append": false, 00:10:00.770 "compare": false, 00:10:00.771 "compare_and_write": false, 
00:10:00.771 "abort": true, 00:10:00.771 "seek_hole": false, 00:10:00.771 "seek_data": false, 00:10:00.771 "copy": true, 00:10:00.771 "nvme_iov_md": false 00:10:00.771 }, 00:10:00.771 "memory_domains": [ 00:10:00.771 { 00:10:00.771 "dma_device_id": "system", 00:10:00.771 "dma_device_type": 1 00:10:00.771 }, 00:10:00.771 { 00:10:00.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.771 "dma_device_type": 2 00:10:00.771 } 00:10:00.771 ], 00:10:00.771 "driver_specific": {} 00:10:00.771 } 00:10:00.771 ] 00:10:00.771 13:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.771 13:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:00.771 13:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:00.771 13:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:00.771 13:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:00.771 13:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.771 13:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.771 13:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:00.771 13:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:00.771 13:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:00.771 13:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.771 13:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.771 13:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:00.771 13:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.771 13:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.771 13:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.771 13:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.771 13:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.030 13:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.030 13:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.030 "name": "Existed_Raid", 00:10:01.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.030 "strip_size_kb": 64, 00:10:01.030 "state": "configuring", 00:10:01.030 "raid_level": "raid0", 00:10:01.030 "superblock": false, 00:10:01.030 "num_base_bdevs": 4, 00:10:01.030 "num_base_bdevs_discovered": 3, 00:10:01.030 "num_base_bdevs_operational": 4, 00:10:01.030 "base_bdevs_list": [ 00:10:01.030 { 00:10:01.030 "name": "BaseBdev1", 00:10:01.030 "uuid": "70ba3e89-358c-4333-954f-d01b7a2b83c2", 00:10:01.030 "is_configured": true, 00:10:01.030 "data_offset": 0, 00:10:01.030 "data_size": 65536 00:10:01.030 }, 00:10:01.030 { 00:10:01.030 "name": "BaseBdev2", 00:10:01.030 "uuid": "8e0294a5-a1e4-4241-99e1-5c58a517e91a", 00:10:01.030 "is_configured": true, 00:10:01.030 "data_offset": 0, 00:10:01.030 "data_size": 65536 00:10:01.030 }, 00:10:01.030 { 00:10:01.030 "name": "BaseBdev3", 00:10:01.030 "uuid": "e8532ff9-2403-470e-89ed-658d430b8b8a", 00:10:01.030 "is_configured": true, 00:10:01.030 "data_offset": 0, 00:10:01.030 "data_size": 65536 00:10:01.030 }, 00:10:01.030 { 00:10:01.030 "name": "BaseBdev4", 00:10:01.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.030 "is_configured": false, 
00:10:01.030 "data_offset": 0, 00:10:01.030 "data_size": 0 00:10:01.030 } 00:10:01.030 ] 00:10:01.030 }' 00:10:01.030 13:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.030 13:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.289 13:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:01.289 13:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.289 13:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.289 [2024-11-17 13:19:50.453570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:01.289 [2024-11-17 13:19:50.453697] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:01.289 [2024-11-17 13:19:50.453725] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:01.289 [2024-11-17 13:19:50.454059] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:01.289 [2024-11-17 13:19:50.454296] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:01.289 [2024-11-17 13:19:50.454347] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:01.289 [2024-11-17 13:19:50.454680] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:01.289 BaseBdev4 00:10:01.289 13:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.289 13:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:01.289 13:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:01.289 13:19:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:01.289 13:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:01.289 13:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:01.289 13:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:01.289 13:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:01.289 13:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.289 13:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.289 13:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.289 13:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:01.289 13:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.289 13:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.289 [ 00:10:01.289 { 00:10:01.289 "name": "BaseBdev4", 00:10:01.289 "aliases": [ 00:10:01.289 "4c51b7b3-3db9-41c2-b147-e4a1037be723" 00:10:01.289 ], 00:10:01.289 "product_name": "Malloc disk", 00:10:01.289 "block_size": 512, 00:10:01.289 "num_blocks": 65536, 00:10:01.289 "uuid": "4c51b7b3-3db9-41c2-b147-e4a1037be723", 00:10:01.289 "assigned_rate_limits": { 00:10:01.289 "rw_ios_per_sec": 0, 00:10:01.289 "rw_mbytes_per_sec": 0, 00:10:01.289 "r_mbytes_per_sec": 0, 00:10:01.289 "w_mbytes_per_sec": 0 00:10:01.289 }, 00:10:01.289 "claimed": true, 00:10:01.289 "claim_type": "exclusive_write", 00:10:01.289 "zoned": false, 00:10:01.289 "supported_io_types": { 00:10:01.289 "read": true, 00:10:01.289 "write": true, 00:10:01.289 "unmap": true, 00:10:01.289 "flush": true, 00:10:01.289 "reset": true, 00:10:01.289 
"nvme_admin": false, 00:10:01.289 "nvme_io": false, 00:10:01.289 "nvme_io_md": false, 00:10:01.289 "write_zeroes": true, 00:10:01.289 "zcopy": true, 00:10:01.289 "get_zone_info": false, 00:10:01.289 "zone_management": false, 00:10:01.289 "zone_append": false, 00:10:01.289 "compare": false, 00:10:01.289 "compare_and_write": false, 00:10:01.289 "abort": true, 00:10:01.289 "seek_hole": false, 00:10:01.289 "seek_data": false, 00:10:01.289 "copy": true, 00:10:01.289 "nvme_iov_md": false 00:10:01.289 }, 00:10:01.289 "memory_domains": [ 00:10:01.289 { 00:10:01.289 "dma_device_id": "system", 00:10:01.289 "dma_device_type": 1 00:10:01.289 }, 00:10:01.289 { 00:10:01.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.289 "dma_device_type": 2 00:10:01.289 } 00:10:01.289 ], 00:10:01.289 "driver_specific": {} 00:10:01.289 } 00:10:01.289 ] 00:10:01.289 13:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.289 13:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:01.289 13:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:01.289 13:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:01.289 13:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:01.289 13:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.289 13:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:01.289 13:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:01.289 13:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:01.289 13:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:01.289 13:19:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.289 13:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.289 13:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.289 13:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.289 13:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.289 13:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.289 13:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.289 13:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.289 13:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.548 13:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.548 "name": "Existed_Raid", 00:10:01.548 "uuid": "8962d769-fe7f-4f70-abf6-5aeedc0f1e6b", 00:10:01.548 "strip_size_kb": 64, 00:10:01.548 "state": "online", 00:10:01.548 "raid_level": "raid0", 00:10:01.548 "superblock": false, 00:10:01.548 "num_base_bdevs": 4, 00:10:01.548 "num_base_bdevs_discovered": 4, 00:10:01.548 "num_base_bdevs_operational": 4, 00:10:01.548 "base_bdevs_list": [ 00:10:01.548 { 00:10:01.548 "name": "BaseBdev1", 00:10:01.548 "uuid": "70ba3e89-358c-4333-954f-d01b7a2b83c2", 00:10:01.548 "is_configured": true, 00:10:01.548 "data_offset": 0, 00:10:01.548 "data_size": 65536 00:10:01.548 }, 00:10:01.548 { 00:10:01.548 "name": "BaseBdev2", 00:10:01.548 "uuid": "8e0294a5-a1e4-4241-99e1-5c58a517e91a", 00:10:01.548 "is_configured": true, 00:10:01.548 "data_offset": 0, 00:10:01.548 "data_size": 65536 00:10:01.548 }, 00:10:01.548 { 00:10:01.548 "name": "BaseBdev3", 00:10:01.548 "uuid": 
"e8532ff9-2403-470e-89ed-658d430b8b8a", 00:10:01.548 "is_configured": true, 00:10:01.548 "data_offset": 0, 00:10:01.548 "data_size": 65536 00:10:01.548 }, 00:10:01.548 { 00:10:01.548 "name": "BaseBdev4", 00:10:01.548 "uuid": "4c51b7b3-3db9-41c2-b147-e4a1037be723", 00:10:01.548 "is_configured": true, 00:10:01.548 "data_offset": 0, 00:10:01.548 "data_size": 65536 00:10:01.548 } 00:10:01.548 ] 00:10:01.548 }' 00:10:01.548 13:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.548 13:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.808 13:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:01.808 13:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:01.808 13:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:01.808 13:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:01.808 13:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:01.808 13:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:01.808 13:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:01.808 13:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:01.808 13:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.808 13:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.808 [2024-11-17 13:19:50.913250] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:01.808 13:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.808 13:19:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:01.808 "name": "Existed_Raid", 00:10:01.808 "aliases": [ 00:10:01.808 "8962d769-fe7f-4f70-abf6-5aeedc0f1e6b" 00:10:01.808 ], 00:10:01.808 "product_name": "Raid Volume", 00:10:01.808 "block_size": 512, 00:10:01.808 "num_blocks": 262144, 00:10:01.808 "uuid": "8962d769-fe7f-4f70-abf6-5aeedc0f1e6b", 00:10:01.808 "assigned_rate_limits": { 00:10:01.808 "rw_ios_per_sec": 0, 00:10:01.808 "rw_mbytes_per_sec": 0, 00:10:01.808 "r_mbytes_per_sec": 0, 00:10:01.808 "w_mbytes_per_sec": 0 00:10:01.808 }, 00:10:01.808 "claimed": false, 00:10:01.808 "zoned": false, 00:10:01.808 "supported_io_types": { 00:10:01.808 "read": true, 00:10:01.808 "write": true, 00:10:01.808 "unmap": true, 00:10:01.808 "flush": true, 00:10:01.808 "reset": true, 00:10:01.808 "nvme_admin": false, 00:10:01.808 "nvme_io": false, 00:10:01.808 "nvme_io_md": false, 00:10:01.808 "write_zeroes": true, 00:10:01.808 "zcopy": false, 00:10:01.808 "get_zone_info": false, 00:10:01.808 "zone_management": false, 00:10:01.808 "zone_append": false, 00:10:01.808 "compare": false, 00:10:01.808 "compare_and_write": false, 00:10:01.808 "abort": false, 00:10:01.808 "seek_hole": false, 00:10:01.808 "seek_data": false, 00:10:01.808 "copy": false, 00:10:01.808 "nvme_iov_md": false 00:10:01.808 }, 00:10:01.808 "memory_domains": [ 00:10:01.808 { 00:10:01.808 "dma_device_id": "system", 00:10:01.808 "dma_device_type": 1 00:10:01.808 }, 00:10:01.808 { 00:10:01.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.808 "dma_device_type": 2 00:10:01.808 }, 00:10:01.808 { 00:10:01.808 "dma_device_id": "system", 00:10:01.808 "dma_device_type": 1 00:10:01.808 }, 00:10:01.808 { 00:10:01.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.808 "dma_device_type": 2 00:10:01.808 }, 00:10:01.808 { 00:10:01.808 "dma_device_id": "system", 00:10:01.808 "dma_device_type": 1 00:10:01.808 }, 00:10:01.808 { 00:10:01.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:01.808 "dma_device_type": 2 00:10:01.808 }, 00:10:01.808 { 00:10:01.808 "dma_device_id": "system", 00:10:01.808 "dma_device_type": 1 00:10:01.808 }, 00:10:01.808 { 00:10:01.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.808 "dma_device_type": 2 00:10:01.808 } 00:10:01.808 ], 00:10:01.808 "driver_specific": { 00:10:01.808 "raid": { 00:10:01.808 "uuid": "8962d769-fe7f-4f70-abf6-5aeedc0f1e6b", 00:10:01.808 "strip_size_kb": 64, 00:10:01.808 "state": "online", 00:10:01.808 "raid_level": "raid0", 00:10:01.808 "superblock": false, 00:10:01.808 "num_base_bdevs": 4, 00:10:01.808 "num_base_bdevs_discovered": 4, 00:10:01.808 "num_base_bdevs_operational": 4, 00:10:01.808 "base_bdevs_list": [ 00:10:01.808 { 00:10:01.809 "name": "BaseBdev1", 00:10:01.809 "uuid": "70ba3e89-358c-4333-954f-d01b7a2b83c2", 00:10:01.809 "is_configured": true, 00:10:01.809 "data_offset": 0, 00:10:01.809 "data_size": 65536 00:10:01.809 }, 00:10:01.809 { 00:10:01.809 "name": "BaseBdev2", 00:10:01.809 "uuid": "8e0294a5-a1e4-4241-99e1-5c58a517e91a", 00:10:01.809 "is_configured": true, 00:10:01.809 "data_offset": 0, 00:10:01.809 "data_size": 65536 00:10:01.809 }, 00:10:01.809 { 00:10:01.809 "name": "BaseBdev3", 00:10:01.809 "uuid": "e8532ff9-2403-470e-89ed-658d430b8b8a", 00:10:01.809 "is_configured": true, 00:10:01.809 "data_offset": 0, 00:10:01.809 "data_size": 65536 00:10:01.809 }, 00:10:01.809 { 00:10:01.809 "name": "BaseBdev4", 00:10:01.809 "uuid": "4c51b7b3-3db9-41c2-b147-e4a1037be723", 00:10:01.809 "is_configured": true, 00:10:01.809 "data_offset": 0, 00:10:01.809 "data_size": 65536 00:10:01.809 } 00:10:01.809 ] 00:10:01.809 } 00:10:01.809 } 00:10:01.809 }' 00:10:01.809 13:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:01.809 13:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:01.809 BaseBdev2 00:10:01.809 BaseBdev3 
00:10:01.809 BaseBdev4' 00:10:01.809 13:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.809 13:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:01.809 13:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.809 13:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:01.809 13:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.809 13:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.809 13:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.068 13:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.068 13:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:02.068 13:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:02.068 13:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:02.068 13:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:02.068 13:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:02.068 13:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.068 13:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.068 13:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.068 13:19:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:02.068 13:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:02.068 13:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:02.068 13:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:02.068 13:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:02.068 13:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.068 13:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.068 13:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.068 13:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:02.068 13:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:02.068 13:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:02.068 13:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:02.068 13:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.068 13:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.068 13:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:02.068 13:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.068 13:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:02.068 13:19:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:02.068 13:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:02.068 13:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.068 13:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.068 [2024-11-17 13:19:51.200489] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:02.068 [2024-11-17 13:19:51.200521] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:02.068 [2024-11-17 13:19:51.200572] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:02.328 13:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.328 13:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:02.328 13:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:02.328 13:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:02.328 13:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:02.328 13:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:02.328 13:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:02.328 13:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.328 13:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:02.328 13:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:02.328 13:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:02.328 13:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:02.328 13:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.328 13:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.328 13:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.328 13:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.328 13:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.328 13:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.328 13:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.328 13:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.328 13:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.328 13:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.328 "name": "Existed_Raid", 00:10:02.328 "uuid": "8962d769-fe7f-4f70-abf6-5aeedc0f1e6b", 00:10:02.328 "strip_size_kb": 64, 00:10:02.328 "state": "offline", 00:10:02.328 "raid_level": "raid0", 00:10:02.328 "superblock": false, 00:10:02.328 "num_base_bdevs": 4, 00:10:02.328 "num_base_bdevs_discovered": 3, 00:10:02.328 "num_base_bdevs_operational": 3, 00:10:02.328 "base_bdevs_list": [ 00:10:02.328 { 00:10:02.328 "name": null, 00:10:02.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.328 "is_configured": false, 00:10:02.328 "data_offset": 0, 00:10:02.328 "data_size": 65536 00:10:02.328 }, 00:10:02.328 { 00:10:02.328 "name": "BaseBdev2", 00:10:02.328 "uuid": "8e0294a5-a1e4-4241-99e1-5c58a517e91a", 00:10:02.328 "is_configured": 
true, 00:10:02.328 "data_offset": 0, 00:10:02.328 "data_size": 65536 00:10:02.328 }, 00:10:02.328 { 00:10:02.328 "name": "BaseBdev3", 00:10:02.328 "uuid": "e8532ff9-2403-470e-89ed-658d430b8b8a", 00:10:02.328 "is_configured": true, 00:10:02.328 "data_offset": 0, 00:10:02.328 "data_size": 65536 00:10:02.328 }, 00:10:02.328 { 00:10:02.328 "name": "BaseBdev4", 00:10:02.328 "uuid": "4c51b7b3-3db9-41c2-b147-e4a1037be723", 00:10:02.328 "is_configured": true, 00:10:02.328 "data_offset": 0, 00:10:02.328 "data_size": 65536 00:10:02.328 } 00:10:02.328 ] 00:10:02.328 }' 00:10:02.328 13:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.328 13:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.588 13:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:02.588 13:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:02.588 13:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.588 13:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:02.588 13:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.588 13:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.588 13:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.588 13:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:02.588 13:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:02.588 13:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:02.588 13:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:02.588 13:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.588 [2024-11-17 13:19:51.776504] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:02.853 13:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.853 13:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:02.853 13:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:02.853 13:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.853 13:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.853 13:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.853 13:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:02.853 13:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.853 13:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:02.853 13:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:02.853 13:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:02.853 13:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.853 13:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.853 [2024-11-17 13:19:51.930540] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:02.853 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.853 13:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:02.853 13:19:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:02.853 13:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.853 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.853 13:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:02.853 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.853 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.122 13:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:03.122 13:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:03.122 13:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:03.122 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.122 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.122 [2024-11-17 13:19:52.085409] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:03.122 [2024-11-17 13:19:52.085510] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:03.122 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.122 13:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:03.122 13:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:03.122 13:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.122 13:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:10:03.122 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.122 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.122 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.122 13:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:03.122 13:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:03.122 13:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:03.122 13:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:03.122 13:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:03.122 13:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:03.122 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.122 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.122 BaseBdev2 00:10:03.122 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.122 13:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:03.122 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:03.122 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:03.122 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:03.122 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:03.122 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:10:03.122 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:03.122 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.122 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.122 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.122 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:03.122 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.122 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.122 [ 00:10:03.122 { 00:10:03.122 "name": "BaseBdev2", 00:10:03.122 "aliases": [ 00:10:03.122 "fd034eff-a0e6-466a-ac95-adb892fcb8c7" 00:10:03.122 ], 00:10:03.122 "product_name": "Malloc disk", 00:10:03.122 "block_size": 512, 00:10:03.122 "num_blocks": 65536, 00:10:03.122 "uuid": "fd034eff-a0e6-466a-ac95-adb892fcb8c7", 00:10:03.122 "assigned_rate_limits": { 00:10:03.122 "rw_ios_per_sec": 0, 00:10:03.122 "rw_mbytes_per_sec": 0, 00:10:03.122 "r_mbytes_per_sec": 0, 00:10:03.122 "w_mbytes_per_sec": 0 00:10:03.122 }, 00:10:03.122 "claimed": false, 00:10:03.122 "zoned": false, 00:10:03.122 "supported_io_types": { 00:10:03.122 "read": true, 00:10:03.122 "write": true, 00:10:03.122 "unmap": true, 00:10:03.122 "flush": true, 00:10:03.122 "reset": true, 00:10:03.122 "nvme_admin": false, 00:10:03.122 "nvme_io": false, 00:10:03.122 "nvme_io_md": false, 00:10:03.122 "write_zeroes": true, 00:10:03.122 "zcopy": true, 00:10:03.122 "get_zone_info": false, 00:10:03.122 "zone_management": false, 00:10:03.122 "zone_append": false, 00:10:03.122 "compare": false, 00:10:03.122 "compare_and_write": false, 00:10:03.122 "abort": true, 00:10:03.122 "seek_hole": false, 00:10:03.122 
"seek_data": false, 00:10:03.122 "copy": true, 00:10:03.122 "nvme_iov_md": false 00:10:03.122 }, 00:10:03.122 "memory_domains": [ 00:10:03.122 { 00:10:03.122 "dma_device_id": "system", 00:10:03.122 "dma_device_type": 1 00:10:03.122 }, 00:10:03.122 { 00:10:03.122 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.122 "dma_device_type": 2 00:10:03.122 } 00:10:03.122 ], 00:10:03.122 "driver_specific": {} 00:10:03.122 } 00:10:03.122 ] 00:10:03.122 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.122 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:03.122 13:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:03.122 13:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:03.122 13:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:03.122 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.122 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.382 BaseBdev3 00:10:03.382 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.382 13:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:03.382 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:03.382 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:03.382 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:03.382 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:03.382 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:10:03.382 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:03.382 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.382 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.382 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.382 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:03.382 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.382 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.382 [ 00:10:03.382 { 00:10:03.382 "name": "BaseBdev3", 00:10:03.382 "aliases": [ 00:10:03.382 "619fb688-8062-49e9-b00a-5e003fdc9d24" 00:10:03.382 ], 00:10:03.382 "product_name": "Malloc disk", 00:10:03.382 "block_size": 512, 00:10:03.382 "num_blocks": 65536, 00:10:03.382 "uuid": "619fb688-8062-49e9-b00a-5e003fdc9d24", 00:10:03.382 "assigned_rate_limits": { 00:10:03.382 "rw_ios_per_sec": 0, 00:10:03.382 "rw_mbytes_per_sec": 0, 00:10:03.382 "r_mbytes_per_sec": 0, 00:10:03.382 "w_mbytes_per_sec": 0 00:10:03.382 }, 00:10:03.382 "claimed": false, 00:10:03.382 "zoned": false, 00:10:03.382 "supported_io_types": { 00:10:03.382 "read": true, 00:10:03.382 "write": true, 00:10:03.382 "unmap": true, 00:10:03.382 "flush": true, 00:10:03.382 "reset": true, 00:10:03.382 "nvme_admin": false, 00:10:03.382 "nvme_io": false, 00:10:03.382 "nvme_io_md": false, 00:10:03.382 "write_zeroes": true, 00:10:03.382 "zcopy": true, 00:10:03.382 "get_zone_info": false, 00:10:03.382 "zone_management": false, 00:10:03.382 "zone_append": false, 00:10:03.382 "compare": false, 00:10:03.382 "compare_and_write": false, 00:10:03.382 "abort": true, 00:10:03.382 "seek_hole": false, 00:10:03.382 "seek_data": false, 
00:10:03.382 "copy": true, 00:10:03.382 "nvme_iov_md": false 00:10:03.382 }, 00:10:03.382 "memory_domains": [ 00:10:03.382 { 00:10:03.382 "dma_device_id": "system", 00:10:03.382 "dma_device_type": 1 00:10:03.382 }, 00:10:03.382 { 00:10:03.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.382 "dma_device_type": 2 00:10:03.382 } 00:10:03.382 ], 00:10:03.382 "driver_specific": {} 00:10:03.382 } 00:10:03.382 ] 00:10:03.382 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.382 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:03.382 13:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:03.382 13:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:03.382 13:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:03.382 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.382 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.382 BaseBdev4 00:10:03.382 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.382 13:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:03.382 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:03.382 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:03.382 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:03.382 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:03.382 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:03.382 
13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:03.382 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.382 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.382 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.382 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:03.382 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.382 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.382 [ 00:10:03.382 { 00:10:03.382 "name": "BaseBdev4", 00:10:03.382 "aliases": [ 00:10:03.382 "3dd7fc97-6ac7-47b3-bc18-0883c5e74bb6" 00:10:03.382 ], 00:10:03.382 "product_name": "Malloc disk", 00:10:03.382 "block_size": 512, 00:10:03.382 "num_blocks": 65536, 00:10:03.382 "uuid": "3dd7fc97-6ac7-47b3-bc18-0883c5e74bb6", 00:10:03.382 "assigned_rate_limits": { 00:10:03.382 "rw_ios_per_sec": 0, 00:10:03.382 "rw_mbytes_per_sec": 0, 00:10:03.382 "r_mbytes_per_sec": 0, 00:10:03.382 "w_mbytes_per_sec": 0 00:10:03.382 }, 00:10:03.382 "claimed": false, 00:10:03.382 "zoned": false, 00:10:03.382 "supported_io_types": { 00:10:03.382 "read": true, 00:10:03.383 "write": true, 00:10:03.383 "unmap": true, 00:10:03.383 "flush": true, 00:10:03.383 "reset": true, 00:10:03.383 "nvme_admin": false, 00:10:03.383 "nvme_io": false, 00:10:03.383 "nvme_io_md": false, 00:10:03.383 "write_zeroes": true, 00:10:03.383 "zcopy": true, 00:10:03.383 "get_zone_info": false, 00:10:03.383 "zone_management": false, 00:10:03.383 "zone_append": false, 00:10:03.383 "compare": false, 00:10:03.383 "compare_and_write": false, 00:10:03.383 "abort": true, 00:10:03.383 "seek_hole": false, 00:10:03.383 "seek_data": false, 00:10:03.383 
"copy": true, 00:10:03.383 "nvme_iov_md": false 00:10:03.383 }, 00:10:03.383 "memory_domains": [ 00:10:03.383 { 00:10:03.383 "dma_device_id": "system", 00:10:03.383 "dma_device_type": 1 00:10:03.383 }, 00:10:03.383 { 00:10:03.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.383 "dma_device_type": 2 00:10:03.383 } 00:10:03.383 ], 00:10:03.383 "driver_specific": {} 00:10:03.383 } 00:10:03.383 ] 00:10:03.383 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.383 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:03.383 13:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:03.383 13:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:03.383 13:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:03.383 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.383 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.383 [2024-11-17 13:19:52.482220] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:03.383 [2024-11-17 13:19:52.482329] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:03.383 [2024-11-17 13:19:52.482374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:03.383 [2024-11-17 13:19:52.484410] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:03.383 [2024-11-17 13:19:52.484523] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:03.383 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.383 13:19:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:03.383 13:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.383 13:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.383 13:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:03.383 13:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.383 13:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:03.383 13:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.383 13:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.383 13:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.383 13:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.383 13:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.383 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.383 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.383 13:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.383 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.383 13:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.383 "name": "Existed_Raid", 00:10:03.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.383 "strip_size_kb": 64, 00:10:03.383 "state": "configuring", 00:10:03.383 
"raid_level": "raid0", 00:10:03.383 "superblock": false, 00:10:03.383 "num_base_bdevs": 4, 00:10:03.383 "num_base_bdevs_discovered": 3, 00:10:03.383 "num_base_bdevs_operational": 4, 00:10:03.383 "base_bdevs_list": [ 00:10:03.383 { 00:10:03.383 "name": "BaseBdev1", 00:10:03.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.383 "is_configured": false, 00:10:03.383 "data_offset": 0, 00:10:03.383 "data_size": 0 00:10:03.383 }, 00:10:03.383 { 00:10:03.383 "name": "BaseBdev2", 00:10:03.383 "uuid": "fd034eff-a0e6-466a-ac95-adb892fcb8c7", 00:10:03.383 "is_configured": true, 00:10:03.383 "data_offset": 0, 00:10:03.383 "data_size": 65536 00:10:03.383 }, 00:10:03.383 { 00:10:03.383 "name": "BaseBdev3", 00:10:03.383 "uuid": "619fb688-8062-49e9-b00a-5e003fdc9d24", 00:10:03.383 "is_configured": true, 00:10:03.383 "data_offset": 0, 00:10:03.383 "data_size": 65536 00:10:03.383 }, 00:10:03.383 { 00:10:03.383 "name": "BaseBdev4", 00:10:03.383 "uuid": "3dd7fc97-6ac7-47b3-bc18-0883c5e74bb6", 00:10:03.383 "is_configured": true, 00:10:03.383 "data_offset": 0, 00:10:03.383 "data_size": 65536 00:10:03.383 } 00:10:03.383 ] 00:10:03.383 }' 00:10:03.383 13:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.383 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.951 13:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:03.951 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.951 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.951 [2024-11-17 13:19:52.969387] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:03.951 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.951 13:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:03.951 13:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.951 13:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.951 13:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:03.951 13:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.951 13:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:03.951 13:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.951 13:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.951 13:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.951 13:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.951 13:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.951 13:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.951 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.951 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.951 13:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.951 13:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.951 "name": "Existed_Raid", 00:10:03.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.951 "strip_size_kb": 64, 00:10:03.951 "state": "configuring", 00:10:03.951 "raid_level": "raid0", 00:10:03.951 "superblock": false, 00:10:03.951 
"num_base_bdevs": 4, 00:10:03.951 "num_base_bdevs_discovered": 2, 00:10:03.951 "num_base_bdevs_operational": 4, 00:10:03.951 "base_bdevs_list": [ 00:10:03.951 { 00:10:03.951 "name": "BaseBdev1", 00:10:03.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.951 "is_configured": false, 00:10:03.951 "data_offset": 0, 00:10:03.951 "data_size": 0 00:10:03.951 }, 00:10:03.951 { 00:10:03.951 "name": null, 00:10:03.951 "uuid": "fd034eff-a0e6-466a-ac95-adb892fcb8c7", 00:10:03.951 "is_configured": false, 00:10:03.952 "data_offset": 0, 00:10:03.952 "data_size": 65536 00:10:03.952 }, 00:10:03.952 { 00:10:03.952 "name": "BaseBdev3", 00:10:03.952 "uuid": "619fb688-8062-49e9-b00a-5e003fdc9d24", 00:10:03.952 "is_configured": true, 00:10:03.952 "data_offset": 0, 00:10:03.952 "data_size": 65536 00:10:03.952 }, 00:10:03.952 { 00:10:03.952 "name": "BaseBdev4", 00:10:03.952 "uuid": "3dd7fc97-6ac7-47b3-bc18-0883c5e74bb6", 00:10:03.952 "is_configured": true, 00:10:03.952 "data_offset": 0, 00:10:03.952 "data_size": 65536 00:10:03.952 } 00:10:03.952 ] 00:10:03.952 }' 00:10:03.952 13:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.952 13:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.211 13:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:04.211 13:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.211 13:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.211 13:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.211 13:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.471 13:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:04.471 13:19:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:04.471 13:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.471 13:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.471 [2024-11-17 13:19:53.477914] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:04.471 BaseBdev1 00:10:04.471 13:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.471 13:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:04.471 13:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:04.471 13:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:04.471 13:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:04.471 13:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:04.471 13:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:04.471 13:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:04.471 13:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.471 13:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.471 13:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.471 13:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:04.471 13:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.471 13:19:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:04.471 [ 00:10:04.471 { 00:10:04.471 "name": "BaseBdev1", 00:10:04.471 "aliases": [ 00:10:04.471 "a792f81b-8268-44c8-9d98-e50b57e9e02f" 00:10:04.471 ], 00:10:04.471 "product_name": "Malloc disk", 00:10:04.471 "block_size": 512, 00:10:04.471 "num_blocks": 65536, 00:10:04.471 "uuid": "a792f81b-8268-44c8-9d98-e50b57e9e02f", 00:10:04.472 "assigned_rate_limits": { 00:10:04.472 "rw_ios_per_sec": 0, 00:10:04.472 "rw_mbytes_per_sec": 0, 00:10:04.472 "r_mbytes_per_sec": 0, 00:10:04.472 "w_mbytes_per_sec": 0 00:10:04.472 }, 00:10:04.472 "claimed": true, 00:10:04.472 "claim_type": "exclusive_write", 00:10:04.472 "zoned": false, 00:10:04.472 "supported_io_types": { 00:10:04.472 "read": true, 00:10:04.472 "write": true, 00:10:04.472 "unmap": true, 00:10:04.472 "flush": true, 00:10:04.472 "reset": true, 00:10:04.472 "nvme_admin": false, 00:10:04.472 "nvme_io": false, 00:10:04.472 "nvme_io_md": false, 00:10:04.472 "write_zeroes": true, 00:10:04.472 "zcopy": true, 00:10:04.472 "get_zone_info": false, 00:10:04.472 "zone_management": false, 00:10:04.472 "zone_append": false, 00:10:04.472 "compare": false, 00:10:04.472 "compare_and_write": false, 00:10:04.472 "abort": true, 00:10:04.472 "seek_hole": false, 00:10:04.472 "seek_data": false, 00:10:04.472 "copy": true, 00:10:04.472 "nvme_iov_md": false 00:10:04.472 }, 00:10:04.472 "memory_domains": [ 00:10:04.472 { 00:10:04.472 "dma_device_id": "system", 00:10:04.472 "dma_device_type": 1 00:10:04.472 }, 00:10:04.472 { 00:10:04.472 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.472 "dma_device_type": 2 00:10:04.472 } 00:10:04.472 ], 00:10:04.472 "driver_specific": {} 00:10:04.472 } 00:10:04.472 ] 00:10:04.472 13:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.472 13:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:04.472 13:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:04.472 13:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.472 13:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:04.472 13:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:04.472 13:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:04.472 13:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:04.472 13:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.472 13:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.472 13:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.472 13:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.472 13:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.472 13:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.472 13:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.472 13:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.472 13:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.472 13:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.472 "name": "Existed_Raid", 00:10:04.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.472 "strip_size_kb": 64, 00:10:04.472 "state": "configuring", 00:10:04.472 "raid_level": "raid0", 00:10:04.472 "superblock": false, 
00:10:04.472 "num_base_bdevs": 4, 00:10:04.472 "num_base_bdevs_discovered": 3, 00:10:04.472 "num_base_bdevs_operational": 4, 00:10:04.472 "base_bdevs_list": [ 00:10:04.472 { 00:10:04.472 "name": "BaseBdev1", 00:10:04.472 "uuid": "a792f81b-8268-44c8-9d98-e50b57e9e02f", 00:10:04.472 "is_configured": true, 00:10:04.472 "data_offset": 0, 00:10:04.472 "data_size": 65536 00:10:04.472 }, 00:10:04.472 { 00:10:04.472 "name": null, 00:10:04.472 "uuid": "fd034eff-a0e6-466a-ac95-adb892fcb8c7", 00:10:04.472 "is_configured": false, 00:10:04.472 "data_offset": 0, 00:10:04.472 "data_size": 65536 00:10:04.472 }, 00:10:04.472 { 00:10:04.472 "name": "BaseBdev3", 00:10:04.472 "uuid": "619fb688-8062-49e9-b00a-5e003fdc9d24", 00:10:04.472 "is_configured": true, 00:10:04.472 "data_offset": 0, 00:10:04.472 "data_size": 65536 00:10:04.472 }, 00:10:04.472 { 00:10:04.472 "name": "BaseBdev4", 00:10:04.472 "uuid": "3dd7fc97-6ac7-47b3-bc18-0883c5e74bb6", 00:10:04.472 "is_configured": true, 00:10:04.472 "data_offset": 0, 00:10:04.472 "data_size": 65536 00:10:04.472 } 00:10:04.472 ] 00:10:04.472 }' 00:10:04.472 13:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.472 13:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.731 13:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.731 13:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.731 13:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.731 13:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:04.732 13:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.991 13:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:04.991 13:19:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:04.992 13:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.992 13:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.992 [2024-11-17 13:19:53.993147] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:04.992 13:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.992 13:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:04.992 13:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.992 13:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:04.992 13:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:04.992 13:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:04.992 13:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:04.992 13:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.992 13:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.992 13:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.992 13:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.992 13:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.992 13:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.992 13:19:54 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:04.992 13:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.992 13:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.992 13:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.992 "name": "Existed_Raid", 00:10:04.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.992 "strip_size_kb": 64, 00:10:04.992 "state": "configuring", 00:10:04.992 "raid_level": "raid0", 00:10:04.992 "superblock": false, 00:10:04.992 "num_base_bdevs": 4, 00:10:04.992 "num_base_bdevs_discovered": 2, 00:10:04.992 "num_base_bdevs_operational": 4, 00:10:04.992 "base_bdevs_list": [ 00:10:04.992 { 00:10:04.992 "name": "BaseBdev1", 00:10:04.992 "uuid": "a792f81b-8268-44c8-9d98-e50b57e9e02f", 00:10:04.992 "is_configured": true, 00:10:04.992 "data_offset": 0, 00:10:04.992 "data_size": 65536 00:10:04.992 }, 00:10:04.992 { 00:10:04.992 "name": null, 00:10:04.992 "uuid": "fd034eff-a0e6-466a-ac95-adb892fcb8c7", 00:10:04.992 "is_configured": false, 00:10:04.992 "data_offset": 0, 00:10:04.992 "data_size": 65536 00:10:04.992 }, 00:10:04.992 { 00:10:04.992 "name": null, 00:10:04.992 "uuid": "619fb688-8062-49e9-b00a-5e003fdc9d24", 00:10:04.992 "is_configured": false, 00:10:04.992 "data_offset": 0, 00:10:04.992 "data_size": 65536 00:10:04.992 }, 00:10:04.992 { 00:10:04.992 "name": "BaseBdev4", 00:10:04.992 "uuid": "3dd7fc97-6ac7-47b3-bc18-0883c5e74bb6", 00:10:04.992 "is_configured": true, 00:10:04.992 "data_offset": 0, 00:10:04.992 "data_size": 65536 00:10:04.992 } 00:10:04.992 ] 00:10:04.992 }' 00:10:04.992 13:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.992 13:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.251 13:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:10:05.252 13:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:05.252 13:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.252 13:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.252 13:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.252 13:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:05.252 13:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:05.252 13:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.252 13:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.252 [2024-11-17 13:19:54.456372] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:05.252 13:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.252 13:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:05.252 13:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.252 13:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.252 13:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:05.252 13:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:05.252 13:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:05.252 13:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:05.252 13:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.252 13:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.252 13:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.252 13:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.252 13:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.252 13:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.252 13:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.512 13:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.512 13:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.512 "name": "Existed_Raid", 00:10:05.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.512 "strip_size_kb": 64, 00:10:05.512 "state": "configuring", 00:10:05.512 "raid_level": "raid0", 00:10:05.512 "superblock": false, 00:10:05.512 "num_base_bdevs": 4, 00:10:05.512 "num_base_bdevs_discovered": 3, 00:10:05.512 "num_base_bdevs_operational": 4, 00:10:05.512 "base_bdevs_list": [ 00:10:05.512 { 00:10:05.512 "name": "BaseBdev1", 00:10:05.512 "uuid": "a792f81b-8268-44c8-9d98-e50b57e9e02f", 00:10:05.512 "is_configured": true, 00:10:05.512 "data_offset": 0, 00:10:05.512 "data_size": 65536 00:10:05.512 }, 00:10:05.512 { 00:10:05.512 "name": null, 00:10:05.512 "uuid": "fd034eff-a0e6-466a-ac95-adb892fcb8c7", 00:10:05.512 "is_configured": false, 00:10:05.512 "data_offset": 0, 00:10:05.512 "data_size": 65536 00:10:05.512 }, 00:10:05.512 { 00:10:05.512 "name": "BaseBdev3", 00:10:05.512 "uuid": "619fb688-8062-49e9-b00a-5e003fdc9d24", 00:10:05.512 "is_configured": 
true, 00:10:05.512 "data_offset": 0, 00:10:05.512 "data_size": 65536 00:10:05.512 }, 00:10:05.512 { 00:10:05.512 "name": "BaseBdev4", 00:10:05.512 "uuid": "3dd7fc97-6ac7-47b3-bc18-0883c5e74bb6", 00:10:05.512 "is_configured": true, 00:10:05.512 "data_offset": 0, 00:10:05.512 "data_size": 65536 00:10:05.512 } 00:10:05.512 ] 00:10:05.512 }' 00:10:05.512 13:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.512 13:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.772 13:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:05.772 13:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.772 13:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.772 13:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.772 13:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.772 13:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:05.772 13:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:05.772 13:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.772 13:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.772 [2024-11-17 13:19:54.899623] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:05.772 13:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.031 13:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:06.032 13:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:10:06.032 13:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.032 13:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:06.032 13:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:06.032 13:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:06.032 13:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.032 13:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.032 13:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.032 13:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.032 13:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.032 13:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.032 13:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.032 13:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.032 13:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.032 13:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.032 "name": "Existed_Raid", 00:10:06.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.032 "strip_size_kb": 64, 00:10:06.032 "state": "configuring", 00:10:06.032 "raid_level": "raid0", 00:10:06.032 "superblock": false, 00:10:06.032 "num_base_bdevs": 4, 00:10:06.032 "num_base_bdevs_discovered": 2, 00:10:06.032 "num_base_bdevs_operational": 4, 00:10:06.032 
"base_bdevs_list": [ 00:10:06.032 { 00:10:06.032 "name": null, 00:10:06.032 "uuid": "a792f81b-8268-44c8-9d98-e50b57e9e02f", 00:10:06.032 "is_configured": false, 00:10:06.032 "data_offset": 0, 00:10:06.032 "data_size": 65536 00:10:06.032 }, 00:10:06.032 { 00:10:06.032 "name": null, 00:10:06.032 "uuid": "fd034eff-a0e6-466a-ac95-adb892fcb8c7", 00:10:06.032 "is_configured": false, 00:10:06.032 "data_offset": 0, 00:10:06.032 "data_size": 65536 00:10:06.032 }, 00:10:06.032 { 00:10:06.032 "name": "BaseBdev3", 00:10:06.032 "uuid": "619fb688-8062-49e9-b00a-5e003fdc9d24", 00:10:06.032 "is_configured": true, 00:10:06.032 "data_offset": 0, 00:10:06.032 "data_size": 65536 00:10:06.032 }, 00:10:06.032 { 00:10:06.032 "name": "BaseBdev4", 00:10:06.032 "uuid": "3dd7fc97-6ac7-47b3-bc18-0883c5e74bb6", 00:10:06.032 "is_configured": true, 00:10:06.032 "data_offset": 0, 00:10:06.032 "data_size": 65536 00:10:06.032 } 00:10:06.032 ] 00:10:06.032 }' 00:10:06.032 13:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.032 13:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.292 13:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:06.292 13:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.292 13:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.292 13:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.292 13:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.292 13:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:06.292 13:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:06.292 13:19:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.292 13:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.292 [2024-11-17 13:19:55.449314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:06.292 13:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.292 13:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:06.292 13:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.292 13:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.292 13:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:06.292 13:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:06.292 13:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:06.292 13:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.292 13:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.292 13:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.292 13:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.292 13:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.292 13:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.292 13:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.292 13:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "Existed_Raid")' 00:10:06.292 13:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.292 13:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.292 "name": "Existed_Raid", 00:10:06.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.292 "strip_size_kb": 64, 00:10:06.292 "state": "configuring", 00:10:06.292 "raid_level": "raid0", 00:10:06.292 "superblock": false, 00:10:06.292 "num_base_bdevs": 4, 00:10:06.292 "num_base_bdevs_discovered": 3, 00:10:06.292 "num_base_bdevs_operational": 4, 00:10:06.292 "base_bdevs_list": [ 00:10:06.292 { 00:10:06.292 "name": null, 00:10:06.292 "uuid": "a792f81b-8268-44c8-9d98-e50b57e9e02f", 00:10:06.292 "is_configured": false, 00:10:06.292 "data_offset": 0, 00:10:06.292 "data_size": 65536 00:10:06.292 }, 00:10:06.292 { 00:10:06.292 "name": "BaseBdev2", 00:10:06.292 "uuid": "fd034eff-a0e6-466a-ac95-adb892fcb8c7", 00:10:06.292 "is_configured": true, 00:10:06.292 "data_offset": 0, 00:10:06.292 "data_size": 65536 00:10:06.292 }, 00:10:06.292 { 00:10:06.292 "name": "BaseBdev3", 00:10:06.292 "uuid": "619fb688-8062-49e9-b00a-5e003fdc9d24", 00:10:06.292 "is_configured": true, 00:10:06.292 "data_offset": 0, 00:10:06.292 "data_size": 65536 00:10:06.292 }, 00:10:06.292 { 00:10:06.292 "name": "BaseBdev4", 00:10:06.292 "uuid": "3dd7fc97-6ac7-47b3-bc18-0883c5e74bb6", 00:10:06.292 "is_configured": true, 00:10:06.292 "data_offset": 0, 00:10:06.292 "data_size": 65536 00:10:06.292 } 00:10:06.292 ] 00:10:06.292 }' 00:10:06.292 13:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.292 13:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.863 13:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.863 13:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:06.863 13:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.863 13:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:06.863 13:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.863 13:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:06.863 13:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.863 13:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:06.863 13:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.863 13:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.863 13:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.863 13:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a792f81b-8268-44c8-9d98-e50b57e9e02f 00:10:06.863 13:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.863 13:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.863 [2024-11-17 13:19:56.046340] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:06.863 [2024-11-17 13:19:56.046390] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:06.863 [2024-11-17 13:19:56.046397] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:06.863 [2024-11-17 13:19:56.046644] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:06.863 [2024-11-17 13:19:56.046787] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:06.863 [2024-11-17 13:19:56.046800] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:06.863 [2024-11-17 13:19:56.047057] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:06.863 NewBaseBdev 00:10:06.863 13:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.863 13:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:06.863 13:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:06.863 13:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:06.863 13:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:06.863 13:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:06.863 13:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:06.863 13:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:06.863 13:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.863 13:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.863 13:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.863 13:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:06.863 13:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.863 13:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.863 [ 00:10:06.863 { 
00:10:06.863 "name": "NewBaseBdev", 00:10:06.863 "aliases": [ 00:10:06.863 "a792f81b-8268-44c8-9d98-e50b57e9e02f" 00:10:06.863 ], 00:10:06.863 "product_name": "Malloc disk", 00:10:06.863 "block_size": 512, 00:10:06.863 "num_blocks": 65536, 00:10:06.863 "uuid": "a792f81b-8268-44c8-9d98-e50b57e9e02f", 00:10:06.863 "assigned_rate_limits": { 00:10:06.863 "rw_ios_per_sec": 0, 00:10:06.863 "rw_mbytes_per_sec": 0, 00:10:06.863 "r_mbytes_per_sec": 0, 00:10:06.863 "w_mbytes_per_sec": 0 00:10:06.863 }, 00:10:06.863 "claimed": true, 00:10:06.863 "claim_type": "exclusive_write", 00:10:06.863 "zoned": false, 00:10:06.863 "supported_io_types": { 00:10:06.863 "read": true, 00:10:06.864 "write": true, 00:10:06.864 "unmap": true, 00:10:06.864 "flush": true, 00:10:06.864 "reset": true, 00:10:06.864 "nvme_admin": false, 00:10:06.864 "nvme_io": false, 00:10:06.864 "nvme_io_md": false, 00:10:06.864 "write_zeroes": true, 00:10:06.864 "zcopy": true, 00:10:06.864 "get_zone_info": false, 00:10:06.864 "zone_management": false, 00:10:06.864 "zone_append": false, 00:10:06.864 "compare": false, 00:10:06.864 "compare_and_write": false, 00:10:06.864 "abort": true, 00:10:06.864 "seek_hole": false, 00:10:06.864 "seek_data": false, 00:10:06.864 "copy": true, 00:10:06.864 "nvme_iov_md": false 00:10:06.864 }, 00:10:06.864 "memory_domains": [ 00:10:06.864 { 00:10:06.864 "dma_device_id": "system", 00:10:06.864 "dma_device_type": 1 00:10:06.864 }, 00:10:06.864 { 00:10:06.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.864 "dma_device_type": 2 00:10:06.864 } 00:10:06.864 ], 00:10:06.864 "driver_specific": {} 00:10:06.864 } 00:10:06.864 ] 00:10:07.124 13:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.124 13:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:07.124 13:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:07.124 
13:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.124 13:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:07.124 13:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:07.124 13:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:07.124 13:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:07.124 13:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.124 13:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.124 13:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.124 13:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.124 13:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.124 13:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.124 13:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.124 13:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.124 13:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.124 13:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.124 "name": "Existed_Raid", 00:10:07.124 "uuid": "a53db862-897a-4d37-bc5b-3f2b9c97efd9", 00:10:07.124 "strip_size_kb": 64, 00:10:07.124 "state": "online", 00:10:07.124 "raid_level": "raid0", 00:10:07.124 "superblock": false, 00:10:07.124 "num_base_bdevs": 4, 00:10:07.124 "num_base_bdevs_discovered": 4, 00:10:07.124 
"num_base_bdevs_operational": 4, 00:10:07.124 "base_bdevs_list": [ 00:10:07.124 { 00:10:07.124 "name": "NewBaseBdev", 00:10:07.124 "uuid": "a792f81b-8268-44c8-9d98-e50b57e9e02f", 00:10:07.124 "is_configured": true, 00:10:07.124 "data_offset": 0, 00:10:07.124 "data_size": 65536 00:10:07.124 }, 00:10:07.124 { 00:10:07.124 "name": "BaseBdev2", 00:10:07.124 "uuid": "fd034eff-a0e6-466a-ac95-adb892fcb8c7", 00:10:07.124 "is_configured": true, 00:10:07.124 "data_offset": 0, 00:10:07.124 "data_size": 65536 00:10:07.124 }, 00:10:07.124 { 00:10:07.124 "name": "BaseBdev3", 00:10:07.124 "uuid": "619fb688-8062-49e9-b00a-5e003fdc9d24", 00:10:07.124 "is_configured": true, 00:10:07.124 "data_offset": 0, 00:10:07.124 "data_size": 65536 00:10:07.124 }, 00:10:07.124 { 00:10:07.124 "name": "BaseBdev4", 00:10:07.124 "uuid": "3dd7fc97-6ac7-47b3-bc18-0883c5e74bb6", 00:10:07.124 "is_configured": true, 00:10:07.124 "data_offset": 0, 00:10:07.124 "data_size": 65536 00:10:07.124 } 00:10:07.124 ] 00:10:07.124 }' 00:10:07.124 13:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.124 13:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.384 13:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:07.384 13:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:07.384 13:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:07.384 13:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:07.384 13:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:07.384 13:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:07.384 13:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:07.384 
13:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:07.384 13:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.384 13:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.384 [2024-11-17 13:19:56.525950] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:07.384 13:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.384 13:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:07.384 "name": "Existed_Raid", 00:10:07.384 "aliases": [ 00:10:07.384 "a53db862-897a-4d37-bc5b-3f2b9c97efd9" 00:10:07.384 ], 00:10:07.384 "product_name": "Raid Volume", 00:10:07.384 "block_size": 512, 00:10:07.384 "num_blocks": 262144, 00:10:07.384 "uuid": "a53db862-897a-4d37-bc5b-3f2b9c97efd9", 00:10:07.384 "assigned_rate_limits": { 00:10:07.384 "rw_ios_per_sec": 0, 00:10:07.384 "rw_mbytes_per_sec": 0, 00:10:07.384 "r_mbytes_per_sec": 0, 00:10:07.384 "w_mbytes_per_sec": 0 00:10:07.384 }, 00:10:07.384 "claimed": false, 00:10:07.384 "zoned": false, 00:10:07.384 "supported_io_types": { 00:10:07.384 "read": true, 00:10:07.384 "write": true, 00:10:07.384 "unmap": true, 00:10:07.384 "flush": true, 00:10:07.384 "reset": true, 00:10:07.384 "nvme_admin": false, 00:10:07.384 "nvme_io": false, 00:10:07.384 "nvme_io_md": false, 00:10:07.384 "write_zeroes": true, 00:10:07.384 "zcopy": false, 00:10:07.384 "get_zone_info": false, 00:10:07.384 "zone_management": false, 00:10:07.384 "zone_append": false, 00:10:07.384 "compare": false, 00:10:07.384 "compare_and_write": false, 00:10:07.384 "abort": false, 00:10:07.384 "seek_hole": false, 00:10:07.384 "seek_data": false, 00:10:07.384 "copy": false, 00:10:07.384 "nvme_iov_md": false 00:10:07.384 }, 00:10:07.384 "memory_domains": [ 00:10:07.384 { 00:10:07.384 "dma_device_id": 
"system", 00:10:07.384 "dma_device_type": 1 00:10:07.384 }, 00:10:07.384 { 00:10:07.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.384 "dma_device_type": 2 00:10:07.384 }, 00:10:07.384 { 00:10:07.384 "dma_device_id": "system", 00:10:07.384 "dma_device_type": 1 00:10:07.384 }, 00:10:07.384 { 00:10:07.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.384 "dma_device_type": 2 00:10:07.384 }, 00:10:07.384 { 00:10:07.384 "dma_device_id": "system", 00:10:07.384 "dma_device_type": 1 00:10:07.384 }, 00:10:07.384 { 00:10:07.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.384 "dma_device_type": 2 00:10:07.384 }, 00:10:07.384 { 00:10:07.384 "dma_device_id": "system", 00:10:07.384 "dma_device_type": 1 00:10:07.385 }, 00:10:07.385 { 00:10:07.385 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.385 "dma_device_type": 2 00:10:07.385 } 00:10:07.385 ], 00:10:07.385 "driver_specific": { 00:10:07.385 "raid": { 00:10:07.385 "uuid": "a53db862-897a-4d37-bc5b-3f2b9c97efd9", 00:10:07.385 "strip_size_kb": 64, 00:10:07.385 "state": "online", 00:10:07.385 "raid_level": "raid0", 00:10:07.385 "superblock": false, 00:10:07.385 "num_base_bdevs": 4, 00:10:07.385 "num_base_bdevs_discovered": 4, 00:10:07.385 "num_base_bdevs_operational": 4, 00:10:07.385 "base_bdevs_list": [ 00:10:07.385 { 00:10:07.385 "name": "NewBaseBdev", 00:10:07.385 "uuid": "a792f81b-8268-44c8-9d98-e50b57e9e02f", 00:10:07.385 "is_configured": true, 00:10:07.385 "data_offset": 0, 00:10:07.385 "data_size": 65536 00:10:07.385 }, 00:10:07.385 { 00:10:07.385 "name": "BaseBdev2", 00:10:07.385 "uuid": "fd034eff-a0e6-466a-ac95-adb892fcb8c7", 00:10:07.385 "is_configured": true, 00:10:07.385 "data_offset": 0, 00:10:07.385 "data_size": 65536 00:10:07.385 }, 00:10:07.385 { 00:10:07.385 "name": "BaseBdev3", 00:10:07.385 "uuid": "619fb688-8062-49e9-b00a-5e003fdc9d24", 00:10:07.385 "is_configured": true, 00:10:07.385 "data_offset": 0, 00:10:07.385 "data_size": 65536 00:10:07.385 }, 00:10:07.385 { 00:10:07.385 "name": 
"BaseBdev4", 00:10:07.385 "uuid": "3dd7fc97-6ac7-47b3-bc18-0883c5e74bb6", 00:10:07.385 "is_configured": true, 00:10:07.385 "data_offset": 0, 00:10:07.385 "data_size": 65536 00:10:07.385 } 00:10:07.385 ] 00:10:07.385 } 00:10:07.385 } 00:10:07.385 }' 00:10:07.385 13:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:07.385 13:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:07.385 BaseBdev2 00:10:07.385 BaseBdev3 00:10:07.385 BaseBdev4' 00:10:07.385 13:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:07.644 13:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:07.644 13:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:07.644 13:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:07.644 13:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:07.644 13:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.644 13:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.644 13:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.644 13:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:07.644 13:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:07.644 13:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:07.644 13:19:56 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:07.644 13:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:07.644 13:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.644 13:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.644 13:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.644 13:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:07.644 13:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:07.644 13:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:07.644 13:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:07.644 13:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:07.644 13:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.644 13:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.644 13:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.644 13:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:07.644 13:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:07.644 13:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:07.644 13:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:07.644 13:19:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.644 13:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.644 13:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:07.644 13:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.644 13:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:07.644 13:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:07.644 13:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:07.644 13:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.644 13:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.644 [2024-11-17 13:19:56.825050] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:07.644 [2024-11-17 13:19:56.825080] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:07.644 [2024-11-17 13:19:56.825153] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:07.644 [2024-11-17 13:19:56.825219] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:07.644 [2024-11-17 13:19:56.825244] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:07.644 13:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.644 13:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69298 00:10:07.644 13:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 
-- # '[' -z 69298 ']' 00:10:07.644 13:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69298 00:10:07.644 13:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:07.644 13:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:07.644 13:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69298 00:10:07.904 killing process with pid 69298 00:10:07.904 13:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:07.904 13:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:07.904 13:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69298' 00:10:07.904 13:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69298 00:10:07.904 [2024-11-17 13:19:56.867934] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:07.904 13:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69298 00:10:08.163 [2024-11-17 13:19:57.267757] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:09.542 13:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:09.542 00:10:09.542 real 0m11.295s 00:10:09.542 user 0m17.873s 00:10:09.542 sys 0m2.020s 00:10:09.542 13:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:09.542 ************************************ 00:10:09.542 END TEST raid_state_function_test 00:10:09.542 ************************************ 00:10:09.542 13:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.542 13:19:58 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 
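The verification pattern exercised throughout the test above follows one shape: fetch the raid bdev list, select the bdev by name with `jq`, and compare fields such as `state` and `num_base_bdevs_discovered` against expected values. The sketch below illustrates that pattern on a stub JSON document mimicking the `bdev_raid_get_bdevs` output seen in the log; it is not part of the test suite, and in the real test the JSON comes from the `rpc_cmd` wrapper rather than a shell variable.

```shell
#!/bin/sh
# Stub of bdev_raid_get_bdevs output, shaped like the JSON in the log above.
raid_bdev_info='[{"name":"Existed_Raid","state":"online","raid_level":"raid0",
"num_base_bdevs_discovered":4,"num_base_bdevs_operational":4}]'

# Select the raid bdev by name, as the test does with:
#   jq -r '.[] | select(.name == "Existed_Raid")'
tmp=$(echo "$raid_bdev_info" | jq -r '.[] | select(.name == "Existed_Raid")')

# Extract the fields the test asserts on.
state=$(echo "$tmp" | jq -r '.state')
discovered=$(echo "$tmp" | jq -r '.num_base_bdevs_discovered')
operational=$(echo "$tmp" | jq -r '.num_base_bdevs_operational')

echo "state=$state discovered=$discovered operational=$operational"
```

After `NewBaseBdev` is added the log shows exactly this transition: `state` moves from `configuring` to `online` once `num_base_bdevs_discovered` reaches `num_base_bdevs_operational`.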
00:10:09.542 13:19:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:09.542 13:19:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:09.542 13:19:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:09.542 ************************************ 00:10:09.542 START TEST raid_state_function_test_sb 00:10:09.542 ************************************ 00:10:09.543 13:19:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:10:09.543 13:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:09.543 13:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:09.543 13:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:09.543 13:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:09.543 13:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:09.543 13:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:09.543 13:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:09.543 13:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:09.543 13:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:09.543 13:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:09.543 13:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:09.543 13:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:09.543 13:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:09.543 13:19:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:09.543 13:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:09.543 13:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:09.543 13:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:09.543 13:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:09.543 13:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:09.543 13:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:09.543 13:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:09.543 13:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:09.543 13:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:09.543 13:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:09.543 13:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:09.543 13:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:09.543 13:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:09.543 13:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:09.543 Process raid pid: 69971 00:10:09.543 13:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:09.543 13:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=69971 00:10:09.543 13:19:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69971' 00:10:09.543 13:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:09.543 13:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 69971 00:10:09.543 13:19:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 69971 ']' 00:10:09.543 13:19:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:09.543 13:19:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:09.543 13:19:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:09.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:09.543 13:19:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:09.543 13:19:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.543 [2024-11-17 13:19:58.544342] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:10:09.543 [2024-11-17 13:19:58.544518] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:09.543 [2024-11-17 13:19:58.721664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.807 [2024-11-17 13:19:58.842470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.142 [2024-11-17 13:19:59.057611] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:10.142 [2024-11-17 13:19:59.057754] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:10.402 13:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:10.402 13:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:10.402 13:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:10.402 13:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.402 13:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.402 [2024-11-17 13:19:59.394603] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:10.402 [2024-11-17 13:19:59.394708] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:10.402 [2024-11-17 13:19:59.394740] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:10.402 [2024-11-17 13:19:59.394764] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:10.402 [2024-11-17 13:19:59.394783] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3
00:10:10.402 [2024-11-17 13:19:59.394805] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:10:10.402 [2024-11-17 13:19:59.394823] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:10:10.402 [2024-11-17 13:19:59.394873] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:10:10.402 13:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:10.402 13:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:10:10.402 13:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:10.402 13:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:10.402 13:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:10.402 13:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:10.402 13:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:10.402 13:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:10.402 13:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:10.402 13:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:10.402 13:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:10.402 13:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:10.402 13:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:10.402 13:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:10.402 13:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:10.402 13:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:10.402 13:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:10.402 "name": "Existed_Raid",
00:10:10.402 "uuid": "1df27960-3980-4d2d-9512-1e9757acba62",
00:10:10.402 "strip_size_kb": 64,
00:10:10.402 "state": "configuring",
00:10:10.402 "raid_level": "raid0",
00:10:10.402 "superblock": true,
00:10:10.402 "num_base_bdevs": 4,
00:10:10.402 "num_base_bdevs_discovered": 0,
00:10:10.402 "num_base_bdevs_operational": 4,
00:10:10.402 "base_bdevs_list": [
00:10:10.402 {
00:10:10.402 "name": "BaseBdev1",
00:10:10.402 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:10.402 "is_configured": false,
00:10:10.402 "data_offset": 0,
00:10:10.402 "data_size": 0
00:10:10.402 },
00:10:10.402 {
00:10:10.402 "name": "BaseBdev2",
00:10:10.402 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:10.402 "is_configured": false,
00:10:10.402 "data_offset": 0,
00:10:10.402 "data_size": 0
00:10:10.402 },
00:10:10.402 {
00:10:10.402 "name": "BaseBdev3",
00:10:10.402 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:10.402 "is_configured": false,
00:10:10.402 "data_offset": 0,
00:10:10.402 "data_size": 0
00:10:10.402 },
00:10:10.402 {
00:10:10.402 "name": "BaseBdev4",
00:10:10.402 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:10.402 "is_configured": false,
00:10:10.402 "data_offset": 0,
00:10:10.402 "data_size": 0
00:10:10.402 }
00:10:10.402 ]
00:10:10.402 }'
00:10:10.402 13:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:10.402 13:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:10.662 13:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:10:10.662 13:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:10.662 13:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:10.662 [2024-11-17 13:19:59.785870] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:10:10.662 [2024-11-17 13:19:59.785911] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:10:10.662 13:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:10.662 13:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:10:10.662 13:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:10.662 13:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:10.662 [2024-11-17 13:19:59.797850] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:10:10.662 [2024-11-17 13:19:59.797893] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:10:10.662 [2024-11-17 13:19:59.797902] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:10:10.662 [2024-11-17 13:19:59.797911] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:10:10.662 [2024-11-17 13:19:59.797917] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:10:10.662 [2024-11-17 13:19:59.797926] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:10:10.662 [2024-11-17 13:19:59.797932] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:10:10.662 [2024-11-17 13:19:59.797941] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:10:10.662 13:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:10.662 13:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:10:10.662 13:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:10.662 13:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:10.662 [2024-11-17 13:19:59.846763] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:10.662 BaseBdev1
00:10:10.662 13:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:10.662 13:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:10:10.662 13:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:10:10.662 13:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:10:10.662 13:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:10:10.662 13:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:10:10.662 13:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:10:10.662 13:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:10:10.662 13:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:10.662 13:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:10.662 13:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:10.662 13:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:10:10.662 13:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:10.662 13:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:10.662 [
00:10:10.662 {
00:10:10.662 "name": "BaseBdev1",
00:10:10.662 "aliases": [
00:10:10.662 "281549c7-870e-420e-9d27-056d78ccea60"
00:10:10.662 ],
00:10:10.662 "product_name": "Malloc disk",
00:10:10.662 "block_size": 512,
00:10:10.662 "num_blocks": 65536,
00:10:10.662 "uuid": "281549c7-870e-420e-9d27-056d78ccea60",
00:10:10.662 "assigned_rate_limits": {
00:10:10.662 "rw_ios_per_sec": 0,
00:10:10.662 "rw_mbytes_per_sec": 0,
00:10:10.662 "r_mbytes_per_sec": 0,
00:10:10.662 "w_mbytes_per_sec": 0
00:10:10.662 },
00:10:10.662 "claimed": true,
00:10:10.662 "claim_type": "exclusive_write",
00:10:10.662 "zoned": false,
00:10:10.662 "supported_io_types": {
00:10:10.662 "read": true,
00:10:10.662 "write": true,
00:10:10.662 "unmap": true,
00:10:10.662 "flush": true,
00:10:10.662 "reset": true,
00:10:10.662 "nvme_admin": false,
00:10:10.662 "nvme_io": false,
00:10:10.662 "nvme_io_md": false,
00:10:10.662 "write_zeroes": true,
00:10:10.662 "zcopy": true,
00:10:10.663 "get_zone_info": false,
00:10:10.663 "zone_management": false,
00:10:10.663 "zone_append": false,
00:10:10.663 "compare": false,
00:10:10.663 "compare_and_write": false,
00:10:10.663 "abort": true,
00:10:10.663 "seek_hole": false,
00:10:10.663 "seek_data": false,
00:10:10.663 "copy": true,
00:10:10.663 "nvme_iov_md": false
00:10:10.663 },
00:10:10.663 "memory_domains": [
00:10:10.663 {
00:10:10.663 "dma_device_id": "system",
00:10:10.663 "dma_device_type": 1
00:10:10.663 },
00:10:10.663 {
00:10:10.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:10.663 "dma_device_type": 2
00:10:10.663 }
00:10:10.663 ],
00:10:10.663 "driver_specific": {}
00:10:10.663 }
00:10:10.663 ]
00:10:10.663 13:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:10.663 13:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:10:10.663 13:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:10:10.663 13:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:10.922 13:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:10.922 13:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:10.922 13:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:10.922 13:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:10.922 13:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:10.922 13:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:10.922 13:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:10.922 13:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:10.922 13:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:10.922 13:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:10.922 13:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:10.922 13:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:10.922 13:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:10.922 13:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:10.922 "name": "Existed_Raid",
00:10:10.922 "uuid": "a3e3a9ba-1cc5-48f6-bf52-53b3586256fe",
00:10:10.922 "strip_size_kb": 64,
00:10:10.922 "state": "configuring",
00:10:10.922 "raid_level": "raid0",
00:10:10.922 "superblock": true,
00:10:10.922 "num_base_bdevs": 4,
00:10:10.922 "num_base_bdevs_discovered": 1,
00:10:10.922 "num_base_bdevs_operational": 4,
00:10:10.922 "base_bdevs_list": [
00:10:10.923 {
00:10:10.923 "name": "BaseBdev1",
00:10:10.923 "uuid": "281549c7-870e-420e-9d27-056d78ccea60",
00:10:10.923 "is_configured": true,
00:10:10.923 "data_offset": 2048,
00:10:10.923 "data_size": 63488
00:10:10.923 },
00:10:10.923 {
00:10:10.923 "name": "BaseBdev2",
00:10:10.923 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:10.923 "is_configured": false,
00:10:10.923 "data_offset": 0,
00:10:10.923 "data_size": 0
00:10:10.923 },
00:10:10.923 {
00:10:10.923 "name": "BaseBdev3",
00:10:10.923 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:10.923 "is_configured": false,
00:10:10.923 "data_offset": 0,
00:10:10.923 "data_size": 0
00:10:10.923 },
00:10:10.923 {
00:10:10.923 "name": "BaseBdev4",
00:10:10.923 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:10.923 "is_configured": false,
00:10:10.923 "data_offset": 0,
00:10:10.923 "data_size": 0
00:10:10.923 }
00:10:10.923 ]
00:10:10.923 }'
00:10:10.923 13:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:10.923 13:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:11.182 13:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:10:11.183 13:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:11.183 13:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:11.183 [2024-11-17 13:20:00.345996] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:10:11.183 [2024-11-17 13:20:00.346131] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:10:11.183 13:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:11.183 13:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:10:11.183 13:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:11.183 13:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:11.183 [2024-11-17 13:20:00.358051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:11.183 [2024-11-17 13:20:00.360009] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:10:11.183 [2024-11-17 13:20:00.360097] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:10:11.183 [2024-11-17 13:20:00.360111] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:10:11.183 [2024-11-17 13:20:00.360123] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:10:11.183 [2024-11-17 13:20:00.360129] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:10:11.183 [2024-11-17 13:20:00.360138] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:10:11.183 13:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:11.183 13:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:10:11.183 13:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:10:11.183 13:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:10:11.183 13:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:11.183 13:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:11.183 13:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:11.183 13:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:11.183 13:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:11.183 13:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:11.183 13:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:11.183 13:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:11.183 13:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:11.183 13:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:11.183 13:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:11.183 13:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:11.183 13:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:11.183 13:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:11.442 13:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:11.442 "name": "Existed_Raid",
00:10:11.442 "uuid": "f2e49f6d-b282-435f-ad08-a77647f0742c",
00:10:11.442 "strip_size_kb": 64,
00:10:11.442 "state": "configuring",
00:10:11.442 "raid_level": "raid0",
00:10:11.442 "superblock": true,
00:10:11.442 "num_base_bdevs": 4,
00:10:11.442 "num_base_bdevs_discovered": 1,
00:10:11.442 "num_base_bdevs_operational": 4,
00:10:11.442 "base_bdevs_list": [
00:10:11.442 {
00:10:11.442 "name": "BaseBdev1",
00:10:11.442 "uuid": "281549c7-870e-420e-9d27-056d78ccea60",
00:10:11.442 "is_configured": true,
00:10:11.442 "data_offset": 2048,
00:10:11.442 "data_size": 63488
00:10:11.442 },
00:10:11.442 {
00:10:11.442 "name": "BaseBdev2",
00:10:11.442 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:11.442 "is_configured": false,
00:10:11.442 "data_offset": 0,
00:10:11.442 "data_size": 0
00:10:11.442 },
00:10:11.442 {
00:10:11.442 "name": "BaseBdev3",
00:10:11.442 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:11.442 "is_configured": false,
00:10:11.442 "data_offset": 0,
00:10:11.442 "data_size": 0
00:10:11.442 },
00:10:11.442 {
00:10:11.442 "name": "BaseBdev4",
00:10:11.442 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:11.442 "is_configured": false,
00:10:11.442 "data_offset": 0,
00:10:11.442 "data_size": 0
00:10:11.442 }
00:10:11.442 ]
00:10:11.442 }'
00:10:11.443 13:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:11.443 13:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:11.702 13:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:10:11.702 13:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:11.702 13:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:11.702 [2024-11-17 13:20:00.906765] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:10:11.702 BaseBdev2
00:10:11.702 13:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:11.702 13:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:10:11.702 13:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:10:11.702 13:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:10:11.702 13:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:10:11.702 13:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:10:11.702 13:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:10:11.702 13:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:10:11.702 13:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:11.702 13:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:11.702 13:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:11.702 13:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:10:11.702 13:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:11.702 13:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:11.961 [
00:10:11.961 {
00:10:11.961 "name": "BaseBdev2",
00:10:11.961 "aliases": [
00:10:11.961 "1f8a5a1a-e1c7-4354-ae72-8a656406a399"
00:10:11.961 ],
00:10:11.961 "product_name": "Malloc disk",
00:10:11.961 "block_size": 512,
00:10:11.961 "num_blocks": 65536,
00:10:11.961 "uuid": "1f8a5a1a-e1c7-4354-ae72-8a656406a399",
00:10:11.961 "assigned_rate_limits": {
00:10:11.961 "rw_ios_per_sec": 0,
00:10:11.961 "rw_mbytes_per_sec": 0,
00:10:11.961 "r_mbytes_per_sec": 0,
00:10:11.961 "w_mbytes_per_sec": 0
00:10:11.961 },
00:10:11.961 "claimed": true,
00:10:11.961 "claim_type": "exclusive_write",
00:10:11.961 "zoned": false,
00:10:11.961 "supported_io_types": {
00:10:11.961 "read": true,
00:10:11.961 "write": true,
00:10:11.961 "unmap": true,
00:10:11.961 "flush": true,
00:10:11.961 "reset": true,
00:10:11.961 "nvme_admin": false,
00:10:11.961 "nvme_io": false,
00:10:11.961 "nvme_io_md": false,
00:10:11.962 "write_zeroes": true,
00:10:11.962 "zcopy": true,
00:10:11.962 "get_zone_info": false,
00:10:11.962 "zone_management": false,
00:10:11.962 "zone_append": false,
00:10:11.962 "compare": false,
00:10:11.962 "compare_and_write": false,
00:10:11.962 "abort": true,
00:10:11.962 "seek_hole": false,
00:10:11.962 "seek_data": false,
00:10:11.962 "copy": true,
00:10:11.962 "nvme_iov_md": false
00:10:11.962 },
00:10:11.962 "memory_domains": [
00:10:11.962 {
00:10:11.962 "dma_device_id": "system",
00:10:11.962 "dma_device_type": 1
00:10:11.962 },
00:10:11.962 {
00:10:11.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:11.962 "dma_device_type": 2
00:10:11.962 }
00:10:11.962 ],
00:10:11.962 "driver_specific": {}
00:10:11.962 }
00:10:11.962 ]
00:10:11.962 13:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:11.962 13:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:10:11.962 13:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:10:11.962 13:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:10:11.962 13:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:10:11.962 13:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:11.962 13:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:11.962 13:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:11.962 13:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:11.962 13:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:11.962 13:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:11.962 13:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:11.962 13:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:11.962 13:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:11.962 13:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:11.962 13:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:11.962 13:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:11.962 13:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:11.962 13:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:11.962 13:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:11.962 "name": "Existed_Raid",
00:10:11.962 "uuid": "f2e49f6d-b282-435f-ad08-a77647f0742c",
00:10:11.962 "strip_size_kb": 64,
00:10:11.962 "state": "configuring",
00:10:11.962 "raid_level": "raid0",
00:10:11.962 "superblock": true,
00:10:11.962 "num_base_bdevs": 4,
00:10:11.962 "num_base_bdevs_discovered": 2,
00:10:11.962 "num_base_bdevs_operational": 4,
00:10:11.962 "base_bdevs_list": [
00:10:11.962 {
00:10:11.962 "name": "BaseBdev1",
00:10:11.962 "uuid": "281549c7-870e-420e-9d27-056d78ccea60",
00:10:11.962 "is_configured": true,
00:10:11.962 "data_offset": 2048,
00:10:11.962 "data_size": 63488
00:10:11.962 },
00:10:11.962 {
00:10:11.962 "name": "BaseBdev2",
00:10:11.962 "uuid": "1f8a5a1a-e1c7-4354-ae72-8a656406a399",
00:10:11.962 "is_configured": true,
00:10:11.962 "data_offset": 2048,
00:10:11.962 "data_size": 63488
00:10:11.962 },
00:10:11.962 {
00:10:11.962 "name": "BaseBdev3",
00:10:11.962 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:11.962 "is_configured": false,
00:10:11.962 "data_offset": 0,
00:10:11.962 "data_size": 0
00:10:11.962 },
00:10:11.962 {
00:10:11.962 "name": "BaseBdev4",
00:10:11.962 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:11.962 "is_configured": false,
00:10:11.962 "data_offset": 0,
00:10:11.962 "data_size": 0
00:10:11.962 }
00:10:11.962 ]
00:10:11.962 }'
00:10:11.962 13:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:11.962 13:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:12.220 13:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:10:12.220 13:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:12.220 13:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:12.479 [2024-11-17 13:20:01.485452] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:10:12.479 BaseBdev3
00:10:12.479 13:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:12.479 13:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:10:12.479 13:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:10:12.479 13:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:10:12.479 13:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:10:12.479 13:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:10:12.479 13:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:10:12.479 13:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:10:12.479 13:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:12.479 13:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:12.479 13:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:12.479 13:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:10:12.479 13:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:12.479 13:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:12.479 [
00:10:12.479 {
00:10:12.479 "name": "BaseBdev3",
00:10:12.479 "aliases": [
00:10:12.479 "e6d2e93f-3377-428e-8d49-dd0d9f9f0ac2"
00:10:12.479 ],
00:10:12.479 "product_name": "Malloc disk",
00:10:12.479 "block_size": 512,
00:10:12.479 "num_blocks": 65536,
00:10:12.479 "uuid": "e6d2e93f-3377-428e-8d49-dd0d9f9f0ac2",
00:10:12.479 "assigned_rate_limits": {
00:10:12.479 "rw_ios_per_sec": 0,
00:10:12.479 "rw_mbytes_per_sec": 0,
00:10:12.479 "r_mbytes_per_sec": 0,
00:10:12.479 "w_mbytes_per_sec": 0
00:10:12.479 },
00:10:12.479 "claimed": true,
00:10:12.479 "claim_type": "exclusive_write",
00:10:12.479 "zoned": false,
00:10:12.479 "supported_io_types": {
00:10:12.479 "read": true,
00:10:12.479 "write": true,
00:10:12.479 "unmap": true,
00:10:12.479 "flush": true,
00:10:12.479 "reset": true,
00:10:12.479 "nvme_admin": false,
00:10:12.479 "nvme_io": false,
00:10:12.479 "nvme_io_md": false,
00:10:12.479 "write_zeroes": true,
00:10:12.479 "zcopy": true,
00:10:12.479 "get_zone_info": false,
00:10:12.479 "zone_management": false,
00:10:12.479 "zone_append": false,
00:10:12.479 "compare": false,
00:10:12.479 "compare_and_write": false,
00:10:12.479 "abort": true,
00:10:12.479 "seek_hole": false,
00:10:12.479 "seek_data": false,
00:10:12.479 "copy": true,
00:10:12.479 "nvme_iov_md": false
00:10:12.479 },
00:10:12.479 "memory_domains": [
00:10:12.479 {
00:10:12.479 "dma_device_id": "system",
00:10:12.479 "dma_device_type": 1
00:10:12.479 },
00:10:12.479 {
00:10:12.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:12.479 "dma_device_type": 2
00:10:12.479 }
00:10:12.479 ],
00:10:12.479 "driver_specific": {}
00:10:12.479 }
00:10:12.479 ]
00:10:12.480 13:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:12.480 13:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:10:12.480 13:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:10:12.480 13:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:10:12.480 13:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:10:12.480 13:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:12.480 13:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:12.480 13:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:12.480 13:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:12.480 13:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:12.480 13:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:12.480 13:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:12.480 13:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:12.480 13:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:12.480 13:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:12.480 13:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:12.480 13:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:12.480 13:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:12.480 13:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:12.480 13:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:12.480 "name": "Existed_Raid",
00:10:12.480 "uuid": "f2e49f6d-b282-435f-ad08-a77647f0742c",
00:10:12.480 "strip_size_kb": 64,
00:10:12.480 "state": "configuring",
00:10:12.480 "raid_level": "raid0",
00:10:12.480 "superblock": true,
00:10:12.480 "num_base_bdevs": 4,
00:10:12.480 "num_base_bdevs_discovered": 3,
00:10:12.480 "num_base_bdevs_operational": 4,
00:10:12.480 "base_bdevs_list": [
00:10:12.480 {
00:10:12.480 "name": "BaseBdev1",
00:10:12.480 "uuid": "281549c7-870e-420e-9d27-056d78ccea60",
00:10:12.480 "is_configured": true,
00:10:12.480 "data_offset": 2048,
00:10:12.480 "data_size": 63488
00:10:12.480 },
00:10:12.480 {
00:10:12.480 "name": "BaseBdev2",
00:10:12.480 "uuid": "1f8a5a1a-e1c7-4354-ae72-8a656406a399",
00:10:12.480 "is_configured": true,
00:10:12.480 "data_offset": 2048,
00:10:12.480 "data_size": 63488
00:10:12.480 },
00:10:12.480 {
00:10:12.480 "name": "BaseBdev3",
00:10:12.480 "uuid": "e6d2e93f-3377-428e-8d49-dd0d9f9f0ac2",
00:10:12.480 "is_configured": true,
00:10:12.480 "data_offset": 2048,
00:10:12.480 "data_size": 63488
00:10:12.480 },
00:10:12.480 {
00:10:12.480 "name": "BaseBdev4",
00:10:12.480 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:12.480 "is_configured": false,
00:10:12.480 "data_offset": 0,
00:10:12.480 "data_size": 0
00:10:12.480 }
00:10:12.480 ]
00:10:12.480 }'
00:10:12.480 13:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:12.480 13:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:13.048 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:10:13.048 13:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:13.048 13:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:13.048 [2024-11-17 13:20:02.054698] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:10:13.048 [2024-11-17 13:20:02.055092] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:10:13.048 [2024-11-17 13:20:02.055112] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:10:13.048 [2024-11-17 13:20:02.055419] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:10:13.048 [2024-11-17 13:20:02.055589] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:10:13.048 [2024-11-17 13:20:02.055603] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:10:13.048 [2024-11-17 13:20:02.055745] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:13.048 BaseBdev4
00:10:13.048 13:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:13.048 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4
00:10:13.048 13:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4
00:10:13.048 13:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:10:13.048 13:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:10:13.048 13:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:10:13.048 13:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:10:13.048 13:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:10:13.048 13:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:13.048 13:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:13.048 13:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:13.048 13:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000
00:10:13.048 13:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:13.048 13:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:13.048 [
00:10:13.048 {
00:10:13.048 "name": "BaseBdev4",
00:10:13.048 "aliases": [
00:10:13.048 "b9bb1bc0-bf04-4266-9212-71f1116a982d"
00:10:13.048 ],
00:10:13.048 "product_name": "Malloc disk",
00:10:13.048 "block_size": 512,
00:10:13.048 "num_blocks": 65536,
00:10:13.048 "uuid": "b9bb1bc0-bf04-4266-9212-71f1116a982d",
00:10:13.048 "assigned_rate_limits": {
00:10:13.048 "rw_ios_per_sec": 0,
00:10:13.048 "rw_mbytes_per_sec": 0,
00:10:13.048 "r_mbytes_per_sec": 0,
00:10:13.048 "w_mbytes_per_sec": 0
00:10:13.048 },
00:10:13.048 "claimed": true,
00:10:13.048 "claim_type": "exclusive_write",
00:10:13.048 "zoned": false,
00:10:13.048 "supported_io_types": {
00:10:13.048 "read": true,
00:10:13.048 "write": true,
00:10:13.048 "unmap": true,
00:10:13.048 "flush": true,
00:10:13.048 "reset": true,
00:10:13.048 "nvme_admin": false,
00:10:13.048 "nvme_io": false,
00:10:13.048 "nvme_io_md": false,
00:10:13.048 "write_zeroes": true,
00:10:13.048 "zcopy": true,
00:10:13.048 "get_zone_info": false,
00:10:13.048 "zone_management": false,
00:10:13.048 "zone_append": false,
00:10:13.048 "compare": false,
00:10:13.048 "compare_and_write": false,
00:10:13.048 "abort": true,
00:10:13.048 "seek_hole": false,
00:10:13.048 "seek_data": false,
00:10:13.048 "copy": true,
00:10:13.048 "nvme_iov_md": false
00:10:13.048 },
00:10:13.048 "memory_domains": [
00:10:13.048 {
00:10:13.048 "dma_device_id": "system",
00:10:13.048 "dma_device_type": 1
00:10:13.048 },
00:10:13.048 {
00:10:13.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:13.048 "dma_device_type": 2
00:10:13.048 }
00:10:13.048 ],
00:10:13.048 "driver_specific": {}
00:10:13.048 }
00:10:13.048 ]
00:10:13.048 13:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:13.048 13:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:10:13.048 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:10:13.048 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:10:13.048 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4
00:10:13.049 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:13.049 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:13.049 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:13.049 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:13.049 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:13.049 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:13.049 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:13.049 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:13.049 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:13.049 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:13.049 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:13.049 13:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:13.049 13:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:13.049 13:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:13.049 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:13.049 "name": "Existed_Raid",
00:10:13.049 "uuid": "f2e49f6d-b282-435f-ad08-a77647f0742c",
00:10:13.049 "strip_size_kb": 64,
00:10:13.049 "state": "online",
00:10:13.049 "raid_level": "raid0",
00:10:13.049 "superblock": true,
00:10:13.049 "num_base_bdevs": 4,
00:10:13.049 "num_base_bdevs_discovered": 4, 00:10:13.049 "num_base_bdevs_operational": 4, 00:10:13.049 "base_bdevs_list": [ 00:10:13.049 { 00:10:13.049 "name": "BaseBdev1", 00:10:13.049 "uuid": "281549c7-870e-420e-9d27-056d78ccea60", 00:10:13.049 "is_configured": true, 00:10:13.049 "data_offset": 2048, 00:10:13.049 "data_size": 63488 00:10:13.049 }, 00:10:13.049 { 00:10:13.049 "name": "BaseBdev2", 00:10:13.049 "uuid": "1f8a5a1a-e1c7-4354-ae72-8a656406a399", 00:10:13.049 "is_configured": true, 00:10:13.049 "data_offset": 2048, 00:10:13.049 "data_size": 63488 00:10:13.049 }, 00:10:13.049 { 00:10:13.049 "name": "BaseBdev3", 00:10:13.049 "uuid": "e6d2e93f-3377-428e-8d49-dd0d9f9f0ac2", 00:10:13.049 "is_configured": true, 00:10:13.049 "data_offset": 2048, 00:10:13.049 "data_size": 63488 00:10:13.049 }, 00:10:13.049 { 00:10:13.049 "name": "BaseBdev4", 00:10:13.049 "uuid": "b9bb1bc0-bf04-4266-9212-71f1116a982d", 00:10:13.049 "is_configured": true, 00:10:13.049 "data_offset": 2048, 00:10:13.049 "data_size": 63488 00:10:13.049 } 00:10:13.049 ] 00:10:13.049 }' 00:10:13.049 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.049 13:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.618 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:13.618 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:13.618 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:13.618 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:13.618 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:13.618 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:13.618 
13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:13.618 13:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.618 13:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.618 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:13.618 [2024-11-17 13:20:02.598151] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:13.618 13:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.618 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:13.618 "name": "Existed_Raid", 00:10:13.618 "aliases": [ 00:10:13.618 "f2e49f6d-b282-435f-ad08-a77647f0742c" 00:10:13.618 ], 00:10:13.618 "product_name": "Raid Volume", 00:10:13.618 "block_size": 512, 00:10:13.618 "num_blocks": 253952, 00:10:13.618 "uuid": "f2e49f6d-b282-435f-ad08-a77647f0742c", 00:10:13.618 "assigned_rate_limits": { 00:10:13.618 "rw_ios_per_sec": 0, 00:10:13.618 "rw_mbytes_per_sec": 0, 00:10:13.618 "r_mbytes_per_sec": 0, 00:10:13.618 "w_mbytes_per_sec": 0 00:10:13.618 }, 00:10:13.618 "claimed": false, 00:10:13.618 "zoned": false, 00:10:13.618 "supported_io_types": { 00:10:13.618 "read": true, 00:10:13.618 "write": true, 00:10:13.618 "unmap": true, 00:10:13.618 "flush": true, 00:10:13.618 "reset": true, 00:10:13.618 "nvme_admin": false, 00:10:13.618 "nvme_io": false, 00:10:13.618 "nvme_io_md": false, 00:10:13.618 "write_zeroes": true, 00:10:13.618 "zcopy": false, 00:10:13.618 "get_zone_info": false, 00:10:13.618 "zone_management": false, 00:10:13.618 "zone_append": false, 00:10:13.618 "compare": false, 00:10:13.618 "compare_and_write": false, 00:10:13.618 "abort": false, 00:10:13.618 "seek_hole": false, 00:10:13.618 "seek_data": false, 00:10:13.618 "copy": false, 00:10:13.618 
"nvme_iov_md": false 00:10:13.618 }, 00:10:13.618 "memory_domains": [ 00:10:13.618 { 00:10:13.618 "dma_device_id": "system", 00:10:13.618 "dma_device_type": 1 00:10:13.618 }, 00:10:13.618 { 00:10:13.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.618 "dma_device_type": 2 00:10:13.618 }, 00:10:13.618 { 00:10:13.618 "dma_device_id": "system", 00:10:13.618 "dma_device_type": 1 00:10:13.618 }, 00:10:13.618 { 00:10:13.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.618 "dma_device_type": 2 00:10:13.618 }, 00:10:13.618 { 00:10:13.619 "dma_device_id": "system", 00:10:13.619 "dma_device_type": 1 00:10:13.619 }, 00:10:13.619 { 00:10:13.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.619 "dma_device_type": 2 00:10:13.619 }, 00:10:13.619 { 00:10:13.619 "dma_device_id": "system", 00:10:13.619 "dma_device_type": 1 00:10:13.619 }, 00:10:13.619 { 00:10:13.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.619 "dma_device_type": 2 00:10:13.619 } 00:10:13.619 ], 00:10:13.619 "driver_specific": { 00:10:13.619 "raid": { 00:10:13.619 "uuid": "f2e49f6d-b282-435f-ad08-a77647f0742c", 00:10:13.619 "strip_size_kb": 64, 00:10:13.619 "state": "online", 00:10:13.619 "raid_level": "raid0", 00:10:13.619 "superblock": true, 00:10:13.619 "num_base_bdevs": 4, 00:10:13.619 "num_base_bdevs_discovered": 4, 00:10:13.619 "num_base_bdevs_operational": 4, 00:10:13.619 "base_bdevs_list": [ 00:10:13.619 { 00:10:13.619 "name": "BaseBdev1", 00:10:13.619 "uuid": "281549c7-870e-420e-9d27-056d78ccea60", 00:10:13.619 "is_configured": true, 00:10:13.619 "data_offset": 2048, 00:10:13.619 "data_size": 63488 00:10:13.619 }, 00:10:13.619 { 00:10:13.619 "name": "BaseBdev2", 00:10:13.619 "uuid": "1f8a5a1a-e1c7-4354-ae72-8a656406a399", 00:10:13.619 "is_configured": true, 00:10:13.619 "data_offset": 2048, 00:10:13.619 "data_size": 63488 00:10:13.619 }, 00:10:13.619 { 00:10:13.619 "name": "BaseBdev3", 00:10:13.619 "uuid": "e6d2e93f-3377-428e-8d49-dd0d9f9f0ac2", 00:10:13.619 "is_configured": true, 
00:10:13.619 "data_offset": 2048, 00:10:13.619 "data_size": 63488 00:10:13.619 }, 00:10:13.619 { 00:10:13.619 "name": "BaseBdev4", 00:10:13.619 "uuid": "b9bb1bc0-bf04-4266-9212-71f1116a982d", 00:10:13.619 "is_configured": true, 00:10:13.619 "data_offset": 2048, 00:10:13.619 "data_size": 63488 00:10:13.619 } 00:10:13.619 ] 00:10:13.619 } 00:10:13.619 } 00:10:13.619 }' 00:10:13.619 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:13.619 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:13.619 BaseBdev2 00:10:13.619 BaseBdev3 00:10:13.619 BaseBdev4' 00:10:13.619 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.619 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:13.619 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.619 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.619 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:13.619 13:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.619 13:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.619 13:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.619 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.619 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.619 13:20:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.619 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.619 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:13.619 13:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.619 13:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.619 13:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.619 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.619 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.619 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.619 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:13.619 13:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.619 13:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.619 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.619 13:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.879 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.879 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.879 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:13.879 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:13.879 13:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.879 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.879 13:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.879 13:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.879 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.879 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.879 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:13.879 13:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.879 13:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.879 [2024-11-17 13:20:02.901336] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:13.879 [2024-11-17 13:20:02.901415] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:13.879 [2024-11-17 13:20:02.901485] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:13.879 13:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.879 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:13.879 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:13.879 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:10:13.879 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:13.879 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:13.879 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:13.879 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.880 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:13.880 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:13.880 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:13.880 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:13.880 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.880 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.880 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.880 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.880 13:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.880 13:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.880 13:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.880 13:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.880 13:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:13.880 13:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.880 "name": "Existed_Raid", 00:10:13.880 "uuid": "f2e49f6d-b282-435f-ad08-a77647f0742c", 00:10:13.880 "strip_size_kb": 64, 00:10:13.880 "state": "offline", 00:10:13.880 "raid_level": "raid0", 00:10:13.880 "superblock": true, 00:10:13.880 "num_base_bdevs": 4, 00:10:13.880 "num_base_bdevs_discovered": 3, 00:10:13.880 "num_base_bdevs_operational": 3, 00:10:13.880 "base_bdevs_list": [ 00:10:13.880 { 00:10:13.880 "name": null, 00:10:13.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.880 "is_configured": false, 00:10:13.880 "data_offset": 0, 00:10:13.880 "data_size": 63488 00:10:13.880 }, 00:10:13.880 { 00:10:13.880 "name": "BaseBdev2", 00:10:13.880 "uuid": "1f8a5a1a-e1c7-4354-ae72-8a656406a399", 00:10:13.880 "is_configured": true, 00:10:13.880 "data_offset": 2048, 00:10:13.880 "data_size": 63488 00:10:13.880 }, 00:10:13.880 { 00:10:13.880 "name": "BaseBdev3", 00:10:13.880 "uuid": "e6d2e93f-3377-428e-8d49-dd0d9f9f0ac2", 00:10:13.880 "is_configured": true, 00:10:13.880 "data_offset": 2048, 00:10:13.880 "data_size": 63488 00:10:13.880 }, 00:10:13.880 { 00:10:13.880 "name": "BaseBdev4", 00:10:13.880 "uuid": "b9bb1bc0-bf04-4266-9212-71f1116a982d", 00:10:13.880 "is_configured": true, 00:10:13.880 "data_offset": 2048, 00:10:13.880 "data_size": 63488 00:10:13.880 } 00:10:13.880 ] 00:10:13.880 }' 00:10:13.880 13:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.880 13:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.450 13:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:14.450 13:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:14.450 13:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.450 
13:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.450 13:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.450 13:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:14.450 13:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.450 13:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:14.450 13:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:14.450 13:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:14.450 13:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.450 13:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.450 [2024-11-17 13:20:03.455337] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:14.450 13:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.450 13:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:14.450 13:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:14.450 13:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:14.450 13:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.450 13:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.450 13:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.450 13:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:14.450 13:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:14.450 13:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:14.450 13:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:14.450 13:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.450 13:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.450 [2024-11-17 13:20:03.605895] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:14.710 13:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.710 13:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:14.710 13:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:14.710 13:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.710 13:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:14.710 13:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.710 13:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.710 13:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.710 13:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:14.710 13:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:14.710 13:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:14.710 13:20:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.710 13:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.710 [2024-11-17 13:20:03.750182] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:14.710 [2024-11-17 13:20:03.750322] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:14.710 13:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.710 13:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:14.710 13:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:14.710 13:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.710 13:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.710 13:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:14.710 13:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.710 13:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.710 13:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:14.710 13:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:14.710 13:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:14.710 13:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:14.710 13:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:14.710 13:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:10:14.710 13:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.710 13:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.970 BaseBdev2 00:10:14.970 13:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.970 13:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:14.970 13:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:14.970 13:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:14.970 13:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:14.970 13:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:14.970 13:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:14.970 13:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:14.970 13:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.970 13:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.970 13:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.970 13:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:14.970 13:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.970 13:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.970 [ 00:10:14.970 { 00:10:14.970 "name": "BaseBdev2", 00:10:14.970 "aliases": [ 00:10:14.970 
"2b8a2c2f-c264-4b7f-bbce-517e3fd475c4" 00:10:14.970 ], 00:10:14.970 "product_name": "Malloc disk", 00:10:14.970 "block_size": 512, 00:10:14.970 "num_blocks": 65536, 00:10:14.970 "uuid": "2b8a2c2f-c264-4b7f-bbce-517e3fd475c4", 00:10:14.970 "assigned_rate_limits": { 00:10:14.970 "rw_ios_per_sec": 0, 00:10:14.970 "rw_mbytes_per_sec": 0, 00:10:14.970 "r_mbytes_per_sec": 0, 00:10:14.970 "w_mbytes_per_sec": 0 00:10:14.970 }, 00:10:14.970 "claimed": false, 00:10:14.970 "zoned": false, 00:10:14.970 "supported_io_types": { 00:10:14.970 "read": true, 00:10:14.970 "write": true, 00:10:14.970 "unmap": true, 00:10:14.970 "flush": true, 00:10:14.970 "reset": true, 00:10:14.970 "nvme_admin": false, 00:10:14.970 "nvme_io": false, 00:10:14.970 "nvme_io_md": false, 00:10:14.970 "write_zeroes": true, 00:10:14.970 "zcopy": true, 00:10:14.970 "get_zone_info": false, 00:10:14.970 "zone_management": false, 00:10:14.970 "zone_append": false, 00:10:14.970 "compare": false, 00:10:14.970 "compare_and_write": false, 00:10:14.970 "abort": true, 00:10:14.970 "seek_hole": false, 00:10:14.970 "seek_data": false, 00:10:14.970 "copy": true, 00:10:14.970 "nvme_iov_md": false 00:10:14.970 }, 00:10:14.970 "memory_domains": [ 00:10:14.970 { 00:10:14.970 "dma_device_id": "system", 00:10:14.970 "dma_device_type": 1 00:10:14.970 }, 00:10:14.970 { 00:10:14.970 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.970 "dma_device_type": 2 00:10:14.970 } 00:10:14.970 ], 00:10:14.970 "driver_specific": {} 00:10:14.970 } 00:10:14.970 ] 00:10:14.970 13:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.970 13:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:14.970 13:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:14.970 13:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:14.970 13:20:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:14.970 13:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.970 13:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.970 BaseBdev3 00:10:14.970 13:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.970 13:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:14.970 13:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:14.970 13:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:14.970 13:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:14.970 13:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:14.970 13:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:14.970 13:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:14.970 13:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.970 13:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.970 13:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.970 13:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:14.970 13:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.970 13:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.970 [ 00:10:14.970 { 
00:10:14.970 "name": "BaseBdev3", 00:10:14.970 "aliases": [ 00:10:14.970 "37056e1e-34e8-4d11-8ad1-501bf5c8bd28" 00:10:14.970 ], 00:10:14.970 "product_name": "Malloc disk", 00:10:14.970 "block_size": 512, 00:10:14.970 "num_blocks": 65536, 00:10:14.970 "uuid": "37056e1e-34e8-4d11-8ad1-501bf5c8bd28", 00:10:14.970 "assigned_rate_limits": { 00:10:14.970 "rw_ios_per_sec": 0, 00:10:14.970 "rw_mbytes_per_sec": 0, 00:10:14.970 "r_mbytes_per_sec": 0, 00:10:14.970 "w_mbytes_per_sec": 0 00:10:14.970 }, 00:10:14.970 "claimed": false, 00:10:14.970 "zoned": false, 00:10:14.970 "supported_io_types": { 00:10:14.970 "read": true, 00:10:14.970 "write": true, 00:10:14.970 "unmap": true, 00:10:14.970 "flush": true, 00:10:14.971 "reset": true, 00:10:14.971 "nvme_admin": false, 00:10:14.971 "nvme_io": false, 00:10:14.971 "nvme_io_md": false, 00:10:14.971 "write_zeroes": true, 00:10:14.971 "zcopy": true, 00:10:14.971 "get_zone_info": false, 00:10:14.971 "zone_management": false, 00:10:14.971 "zone_append": false, 00:10:14.971 "compare": false, 00:10:14.971 "compare_and_write": false, 00:10:14.971 "abort": true, 00:10:14.971 "seek_hole": false, 00:10:14.971 "seek_data": false, 00:10:14.971 "copy": true, 00:10:14.971 "nvme_iov_md": false 00:10:14.971 }, 00:10:14.971 "memory_domains": [ 00:10:14.971 { 00:10:14.971 "dma_device_id": "system", 00:10:14.971 "dma_device_type": 1 00:10:14.971 }, 00:10:14.971 { 00:10:14.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.971 "dma_device_type": 2 00:10:14.971 } 00:10:14.971 ], 00:10:14.971 "driver_specific": {} 00:10:14.971 } 00:10:14.971 ] 00:10:14.971 13:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.971 13:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:14.971 13:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:14.971 13:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:14.971 13:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:14.971 13:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.971 13:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.971 BaseBdev4 00:10:14.971 13:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.971 13:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:14.971 13:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:14.971 13:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:14.971 13:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:14.971 13:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:14.971 13:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:14.971 13:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:14.971 13:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.971 13:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.971 13:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.971 13:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:14.971 13:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.971 13:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:14.971 [ 00:10:14.971 { 00:10:14.971 "name": "BaseBdev4", 00:10:14.971 "aliases": [ 00:10:14.971 "4ea94cfb-27d7-469a-a5b4-65beffe98534" 00:10:14.971 ], 00:10:14.971 "product_name": "Malloc disk", 00:10:14.971 "block_size": 512, 00:10:14.971 "num_blocks": 65536, 00:10:14.971 "uuid": "4ea94cfb-27d7-469a-a5b4-65beffe98534", 00:10:14.971 "assigned_rate_limits": { 00:10:14.971 "rw_ios_per_sec": 0, 00:10:14.971 "rw_mbytes_per_sec": 0, 00:10:14.971 "r_mbytes_per_sec": 0, 00:10:14.971 "w_mbytes_per_sec": 0 00:10:14.971 }, 00:10:14.971 "claimed": false, 00:10:14.971 "zoned": false, 00:10:14.971 "supported_io_types": { 00:10:14.971 "read": true, 00:10:14.971 "write": true, 00:10:14.971 "unmap": true, 00:10:14.971 "flush": true, 00:10:14.971 "reset": true, 00:10:14.971 "nvme_admin": false, 00:10:14.971 "nvme_io": false, 00:10:14.971 "nvme_io_md": false, 00:10:14.971 "write_zeroes": true, 00:10:14.971 "zcopy": true, 00:10:14.971 "get_zone_info": false, 00:10:14.971 "zone_management": false, 00:10:14.971 "zone_append": false, 00:10:14.971 "compare": false, 00:10:14.971 "compare_and_write": false, 00:10:14.971 "abort": true, 00:10:14.971 "seek_hole": false, 00:10:14.971 "seek_data": false, 00:10:14.971 "copy": true, 00:10:14.971 "nvme_iov_md": false 00:10:14.971 }, 00:10:14.971 "memory_domains": [ 00:10:14.971 { 00:10:14.971 "dma_device_id": "system", 00:10:14.971 "dma_device_type": 1 00:10:14.971 }, 00:10:14.971 { 00:10:14.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.971 "dma_device_type": 2 00:10:14.971 } 00:10:14.971 ], 00:10:14.971 "driver_specific": {} 00:10:14.971 } 00:10:14.971 ] 00:10:14.971 13:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.971 13:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:14.971 13:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:14.971 13:20:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:14.971 13:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:14.971 13:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.971 13:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.971 [2024-11-17 13:20:04.145901] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:14.971 [2024-11-17 13:20:04.146024] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:14.971 [2024-11-17 13:20:04.146066] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:14.971 [2024-11-17 13:20:04.147862] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:14.971 [2024-11-17 13:20:04.147966] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:14.971 13:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.971 13:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:14.971 13:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.971 13:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.971 13:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:14.971 13:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:14.971 13:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:14.971 13:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.971 13:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.971 13:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.971 13:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.971 13:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.971 13:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.971 13:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.971 13:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.971 13:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.231 13:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.231 "name": "Existed_Raid", 00:10:15.231 "uuid": "c6bd443d-c84f-4d2c-afca-903bdd768e94", 00:10:15.231 "strip_size_kb": 64, 00:10:15.231 "state": "configuring", 00:10:15.231 "raid_level": "raid0", 00:10:15.231 "superblock": true, 00:10:15.231 "num_base_bdevs": 4, 00:10:15.231 "num_base_bdevs_discovered": 3, 00:10:15.231 "num_base_bdevs_operational": 4, 00:10:15.231 "base_bdevs_list": [ 00:10:15.231 { 00:10:15.231 "name": "BaseBdev1", 00:10:15.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.231 "is_configured": false, 00:10:15.231 "data_offset": 0, 00:10:15.231 "data_size": 0 00:10:15.231 }, 00:10:15.231 { 00:10:15.231 "name": "BaseBdev2", 00:10:15.231 "uuid": "2b8a2c2f-c264-4b7f-bbce-517e3fd475c4", 00:10:15.231 "is_configured": true, 00:10:15.231 "data_offset": 2048, 00:10:15.231 "data_size": 63488 
00:10:15.231 }, 00:10:15.231 { 00:10:15.231 "name": "BaseBdev3", 00:10:15.231 "uuid": "37056e1e-34e8-4d11-8ad1-501bf5c8bd28", 00:10:15.231 "is_configured": true, 00:10:15.231 "data_offset": 2048, 00:10:15.231 "data_size": 63488 00:10:15.231 }, 00:10:15.231 { 00:10:15.231 "name": "BaseBdev4", 00:10:15.231 "uuid": "4ea94cfb-27d7-469a-a5b4-65beffe98534", 00:10:15.231 "is_configured": true, 00:10:15.231 "data_offset": 2048, 00:10:15.231 "data_size": 63488 00:10:15.231 } 00:10:15.231 ] 00:10:15.231 }' 00:10:15.231 13:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.231 13:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.492 13:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:15.492 13:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.492 13:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.492 [2024-11-17 13:20:04.577165] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:15.492 13:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.492 13:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:15.492 13:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.492 13:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:15.492 13:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:15.492 13:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:15.492 13:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:15.492 13:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.492 13:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.492 13:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.492 13:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.492 13:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.492 13:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.492 13:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.492 13:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.492 13:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.492 13:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.492 "name": "Existed_Raid", 00:10:15.492 "uuid": "c6bd443d-c84f-4d2c-afca-903bdd768e94", 00:10:15.492 "strip_size_kb": 64, 00:10:15.492 "state": "configuring", 00:10:15.492 "raid_level": "raid0", 00:10:15.492 "superblock": true, 00:10:15.492 "num_base_bdevs": 4, 00:10:15.492 "num_base_bdevs_discovered": 2, 00:10:15.492 "num_base_bdevs_operational": 4, 00:10:15.492 "base_bdevs_list": [ 00:10:15.492 { 00:10:15.492 "name": "BaseBdev1", 00:10:15.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.492 "is_configured": false, 00:10:15.492 "data_offset": 0, 00:10:15.492 "data_size": 0 00:10:15.492 }, 00:10:15.492 { 00:10:15.492 "name": null, 00:10:15.492 "uuid": "2b8a2c2f-c264-4b7f-bbce-517e3fd475c4", 00:10:15.492 "is_configured": false, 00:10:15.492 "data_offset": 0, 00:10:15.492 "data_size": 63488 
00:10:15.492 }, 00:10:15.492 { 00:10:15.492 "name": "BaseBdev3", 00:10:15.492 "uuid": "37056e1e-34e8-4d11-8ad1-501bf5c8bd28", 00:10:15.492 "is_configured": true, 00:10:15.492 "data_offset": 2048, 00:10:15.492 "data_size": 63488 00:10:15.492 }, 00:10:15.492 { 00:10:15.492 "name": "BaseBdev4", 00:10:15.492 "uuid": "4ea94cfb-27d7-469a-a5b4-65beffe98534", 00:10:15.492 "is_configured": true, 00:10:15.492 "data_offset": 2048, 00:10:15.492 "data_size": 63488 00:10:15.492 } 00:10:15.492 ] 00:10:15.492 }' 00:10:15.492 13:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.492 13:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.061 13:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.061 13:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.061 13:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:16.061 13:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.061 13:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.061 13:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:16.061 13:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:16.061 13:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.061 13:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.061 [2024-11-17 13:20:05.116463] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:16.061 BaseBdev1 00:10:16.061 13:20:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.061 13:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:16.061 13:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:16.061 13:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:16.061 13:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:16.061 13:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:16.061 13:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:16.061 13:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:16.061 13:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.061 13:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.061 13:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.061 13:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:16.061 13:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.061 13:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.061 [ 00:10:16.061 { 00:10:16.061 "name": "BaseBdev1", 00:10:16.061 "aliases": [ 00:10:16.061 "84b57290-017e-4e39-8697-adbf81bcff4d" 00:10:16.061 ], 00:10:16.061 "product_name": "Malloc disk", 00:10:16.061 "block_size": 512, 00:10:16.061 "num_blocks": 65536, 00:10:16.061 "uuid": "84b57290-017e-4e39-8697-adbf81bcff4d", 00:10:16.061 "assigned_rate_limits": { 00:10:16.061 "rw_ios_per_sec": 0, 00:10:16.061 "rw_mbytes_per_sec": 0, 
00:10:16.061 "r_mbytes_per_sec": 0, 00:10:16.061 "w_mbytes_per_sec": 0 00:10:16.061 }, 00:10:16.061 "claimed": true, 00:10:16.061 "claim_type": "exclusive_write", 00:10:16.061 "zoned": false, 00:10:16.061 "supported_io_types": { 00:10:16.061 "read": true, 00:10:16.061 "write": true, 00:10:16.061 "unmap": true, 00:10:16.061 "flush": true, 00:10:16.061 "reset": true, 00:10:16.061 "nvme_admin": false, 00:10:16.061 "nvme_io": false, 00:10:16.062 "nvme_io_md": false, 00:10:16.062 "write_zeroes": true, 00:10:16.062 "zcopy": true, 00:10:16.062 "get_zone_info": false, 00:10:16.062 "zone_management": false, 00:10:16.062 "zone_append": false, 00:10:16.062 "compare": false, 00:10:16.062 "compare_and_write": false, 00:10:16.062 "abort": true, 00:10:16.062 "seek_hole": false, 00:10:16.062 "seek_data": false, 00:10:16.062 "copy": true, 00:10:16.062 "nvme_iov_md": false 00:10:16.062 }, 00:10:16.062 "memory_domains": [ 00:10:16.062 { 00:10:16.062 "dma_device_id": "system", 00:10:16.062 "dma_device_type": 1 00:10:16.062 }, 00:10:16.062 { 00:10:16.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.062 "dma_device_type": 2 00:10:16.062 } 00:10:16.062 ], 00:10:16.062 "driver_specific": {} 00:10:16.062 } 00:10:16.062 ] 00:10:16.062 13:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.062 13:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:16.062 13:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:16.062 13:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.062 13:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:16.062 13:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:16.062 13:20:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.062 13:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:16.062 13:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.062 13:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.062 13:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.062 13:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.062 13:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.062 13:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.062 13:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.062 13:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.062 13:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.062 13:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.062 "name": "Existed_Raid", 00:10:16.062 "uuid": "c6bd443d-c84f-4d2c-afca-903bdd768e94", 00:10:16.062 "strip_size_kb": 64, 00:10:16.062 "state": "configuring", 00:10:16.062 "raid_level": "raid0", 00:10:16.062 "superblock": true, 00:10:16.062 "num_base_bdevs": 4, 00:10:16.062 "num_base_bdevs_discovered": 3, 00:10:16.062 "num_base_bdevs_operational": 4, 00:10:16.062 "base_bdevs_list": [ 00:10:16.062 { 00:10:16.062 "name": "BaseBdev1", 00:10:16.062 "uuid": "84b57290-017e-4e39-8697-adbf81bcff4d", 00:10:16.062 "is_configured": true, 00:10:16.062 "data_offset": 2048, 00:10:16.062 "data_size": 63488 00:10:16.062 }, 00:10:16.062 { 
00:10:16.062 "name": null, 00:10:16.062 "uuid": "2b8a2c2f-c264-4b7f-bbce-517e3fd475c4", 00:10:16.062 "is_configured": false, 00:10:16.062 "data_offset": 0, 00:10:16.062 "data_size": 63488 00:10:16.062 }, 00:10:16.062 { 00:10:16.062 "name": "BaseBdev3", 00:10:16.062 "uuid": "37056e1e-34e8-4d11-8ad1-501bf5c8bd28", 00:10:16.062 "is_configured": true, 00:10:16.062 "data_offset": 2048, 00:10:16.062 "data_size": 63488 00:10:16.062 }, 00:10:16.062 { 00:10:16.062 "name": "BaseBdev4", 00:10:16.062 "uuid": "4ea94cfb-27d7-469a-a5b4-65beffe98534", 00:10:16.062 "is_configured": true, 00:10:16.062 "data_offset": 2048, 00:10:16.062 "data_size": 63488 00:10:16.062 } 00:10:16.062 ] 00:10:16.062 }' 00:10:16.062 13:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.062 13:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.631 13:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:16.631 13:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.631 13:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.631 13:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.631 13:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.631 13:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:16.631 13:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:16.631 13:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.631 13:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.631 [2024-11-17 13:20:05.615701] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:16.631 13:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.631 13:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:16.631 13:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.631 13:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:16.631 13:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:16.631 13:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.631 13:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:16.631 13:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.631 13:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.631 13:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.631 13:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.631 13:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.631 13:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.631 13:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.631 13:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.631 13:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.631 13:20:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.631 "name": "Existed_Raid", 00:10:16.631 "uuid": "c6bd443d-c84f-4d2c-afca-903bdd768e94", 00:10:16.631 "strip_size_kb": 64, 00:10:16.631 "state": "configuring", 00:10:16.631 "raid_level": "raid0", 00:10:16.631 "superblock": true, 00:10:16.631 "num_base_bdevs": 4, 00:10:16.631 "num_base_bdevs_discovered": 2, 00:10:16.631 "num_base_bdevs_operational": 4, 00:10:16.631 "base_bdevs_list": [ 00:10:16.631 { 00:10:16.631 "name": "BaseBdev1", 00:10:16.631 "uuid": "84b57290-017e-4e39-8697-adbf81bcff4d", 00:10:16.631 "is_configured": true, 00:10:16.631 "data_offset": 2048, 00:10:16.631 "data_size": 63488 00:10:16.631 }, 00:10:16.631 { 00:10:16.631 "name": null, 00:10:16.631 "uuid": "2b8a2c2f-c264-4b7f-bbce-517e3fd475c4", 00:10:16.631 "is_configured": false, 00:10:16.631 "data_offset": 0, 00:10:16.632 "data_size": 63488 00:10:16.632 }, 00:10:16.632 { 00:10:16.632 "name": null, 00:10:16.632 "uuid": "37056e1e-34e8-4d11-8ad1-501bf5c8bd28", 00:10:16.632 "is_configured": false, 00:10:16.632 "data_offset": 0, 00:10:16.632 "data_size": 63488 00:10:16.632 }, 00:10:16.632 { 00:10:16.632 "name": "BaseBdev4", 00:10:16.632 "uuid": "4ea94cfb-27d7-469a-a5b4-65beffe98534", 00:10:16.632 "is_configured": true, 00:10:16.632 "data_offset": 2048, 00:10:16.632 "data_size": 63488 00:10:16.632 } 00:10:16.632 ] 00:10:16.632 }' 00:10:16.632 13:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.632 13:20:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.891 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.891 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:16.891 13:20:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.891 
13:20:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.891 13:20:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.151 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:17.151 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:17.151 13:20:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.151 13:20:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.151 [2024-11-17 13:20:06.134890] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:17.151 13:20:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.151 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:17.151 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.151 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:17.151 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:17.151 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:17.151 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:17.151 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.151 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.151 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:17.151 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.151 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.151 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.151 13:20:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.151 13:20:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.151 13:20:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.151 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.151 "name": "Existed_Raid", 00:10:17.151 "uuid": "c6bd443d-c84f-4d2c-afca-903bdd768e94", 00:10:17.151 "strip_size_kb": 64, 00:10:17.151 "state": "configuring", 00:10:17.151 "raid_level": "raid0", 00:10:17.151 "superblock": true, 00:10:17.151 "num_base_bdevs": 4, 00:10:17.151 "num_base_bdevs_discovered": 3, 00:10:17.151 "num_base_bdevs_operational": 4, 00:10:17.151 "base_bdevs_list": [ 00:10:17.151 { 00:10:17.151 "name": "BaseBdev1", 00:10:17.151 "uuid": "84b57290-017e-4e39-8697-adbf81bcff4d", 00:10:17.151 "is_configured": true, 00:10:17.151 "data_offset": 2048, 00:10:17.151 "data_size": 63488 00:10:17.151 }, 00:10:17.151 { 00:10:17.151 "name": null, 00:10:17.151 "uuid": "2b8a2c2f-c264-4b7f-bbce-517e3fd475c4", 00:10:17.151 "is_configured": false, 00:10:17.151 "data_offset": 0, 00:10:17.151 "data_size": 63488 00:10:17.151 }, 00:10:17.151 { 00:10:17.151 "name": "BaseBdev3", 00:10:17.151 "uuid": "37056e1e-34e8-4d11-8ad1-501bf5c8bd28", 00:10:17.151 "is_configured": true, 00:10:17.151 "data_offset": 2048, 00:10:17.151 "data_size": 63488 00:10:17.151 }, 00:10:17.151 { 00:10:17.151 "name": "BaseBdev4", 00:10:17.151 "uuid": 
"4ea94cfb-27d7-469a-a5b4-65beffe98534", 00:10:17.151 "is_configured": true, 00:10:17.151 "data_offset": 2048, 00:10:17.151 "data_size": 63488 00:10:17.151 } 00:10:17.151 ] 00:10:17.151 }' 00:10:17.151 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.151 13:20:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.410 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:17.410 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.410 13:20:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.410 13:20:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.410 13:20:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.669 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:17.669 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:17.669 13:20:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.669 13:20:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.669 [2024-11-17 13:20:06.650048] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:17.669 13:20:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.669 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:17.669 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.669 13:20:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:17.669 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:17.669 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:17.669 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:17.669 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.669 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.669 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.669 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.669 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.669 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.669 13:20:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.669 13:20:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.669 13:20:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.669 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.669 "name": "Existed_Raid", 00:10:17.669 "uuid": "c6bd443d-c84f-4d2c-afca-903bdd768e94", 00:10:17.669 "strip_size_kb": 64, 00:10:17.669 "state": "configuring", 00:10:17.669 "raid_level": "raid0", 00:10:17.669 "superblock": true, 00:10:17.669 "num_base_bdevs": 4, 00:10:17.669 "num_base_bdevs_discovered": 2, 00:10:17.669 "num_base_bdevs_operational": 4, 00:10:17.669 "base_bdevs_list": [ 00:10:17.669 { 00:10:17.669 "name": null, 00:10:17.669 
"uuid": "84b57290-017e-4e39-8697-adbf81bcff4d", 00:10:17.669 "is_configured": false, 00:10:17.669 "data_offset": 0, 00:10:17.669 "data_size": 63488 00:10:17.669 }, 00:10:17.669 { 00:10:17.669 "name": null, 00:10:17.669 "uuid": "2b8a2c2f-c264-4b7f-bbce-517e3fd475c4", 00:10:17.669 "is_configured": false, 00:10:17.669 "data_offset": 0, 00:10:17.669 "data_size": 63488 00:10:17.669 }, 00:10:17.669 { 00:10:17.669 "name": "BaseBdev3", 00:10:17.669 "uuid": "37056e1e-34e8-4d11-8ad1-501bf5c8bd28", 00:10:17.669 "is_configured": true, 00:10:17.669 "data_offset": 2048, 00:10:17.669 "data_size": 63488 00:10:17.669 }, 00:10:17.669 { 00:10:17.669 "name": "BaseBdev4", 00:10:17.669 "uuid": "4ea94cfb-27d7-469a-a5b4-65beffe98534", 00:10:17.669 "is_configured": true, 00:10:17.669 "data_offset": 2048, 00:10:17.669 "data_size": 63488 00:10:17.669 } 00:10:17.669 ] 00:10:17.669 }' 00:10:17.669 13:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.669 13:20:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.238 13:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.238 13:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.238 13:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.238 13:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:18.238 13:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.238 13:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:18.238 13:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:18.238 13:20:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.238 13:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.238 [2024-11-17 13:20:07.256438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:18.238 13:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.238 13:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:18.238 13:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.238 13:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.238 13:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:18.238 13:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:18.238 13:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:18.238 13:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.238 13:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.238 13:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.238 13:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.238 13:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.238 13:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.238 13:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.238 13:20:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.238 13:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.238 13:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.238 "name": "Existed_Raid", 00:10:18.238 "uuid": "c6bd443d-c84f-4d2c-afca-903bdd768e94", 00:10:18.238 "strip_size_kb": 64, 00:10:18.238 "state": "configuring", 00:10:18.238 "raid_level": "raid0", 00:10:18.238 "superblock": true, 00:10:18.238 "num_base_bdevs": 4, 00:10:18.238 "num_base_bdevs_discovered": 3, 00:10:18.239 "num_base_bdevs_operational": 4, 00:10:18.239 "base_bdevs_list": [ 00:10:18.239 { 00:10:18.239 "name": null, 00:10:18.239 "uuid": "84b57290-017e-4e39-8697-adbf81bcff4d", 00:10:18.239 "is_configured": false, 00:10:18.239 "data_offset": 0, 00:10:18.239 "data_size": 63488 00:10:18.239 }, 00:10:18.239 { 00:10:18.239 "name": "BaseBdev2", 00:10:18.239 "uuid": "2b8a2c2f-c264-4b7f-bbce-517e3fd475c4", 00:10:18.239 "is_configured": true, 00:10:18.239 "data_offset": 2048, 00:10:18.239 "data_size": 63488 00:10:18.239 }, 00:10:18.239 { 00:10:18.239 "name": "BaseBdev3", 00:10:18.239 "uuid": "37056e1e-34e8-4d11-8ad1-501bf5c8bd28", 00:10:18.239 "is_configured": true, 00:10:18.239 "data_offset": 2048, 00:10:18.239 "data_size": 63488 00:10:18.239 }, 00:10:18.239 { 00:10:18.239 "name": "BaseBdev4", 00:10:18.239 "uuid": "4ea94cfb-27d7-469a-a5b4-65beffe98534", 00:10:18.239 "is_configured": true, 00:10:18.239 "data_offset": 2048, 00:10:18.239 "data_size": 63488 00:10:18.239 } 00:10:18.239 ] 00:10:18.239 }' 00:10:18.239 13:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.239 13:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.809 13:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.809 13:20:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:18.809 13:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.809 13:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.809 13:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.809 13:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:18.809 13:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.809 13:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.809 13:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.809 13:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:18.809 13:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.809 13:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 84b57290-017e-4e39-8697-adbf81bcff4d 00:10:18.809 13:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.809 13:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.809 [2024-11-17 13:20:07.866835] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:18.809 [2024-11-17 13:20:07.867164] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:18.809 [2024-11-17 13:20:07.867225] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:18.809 [2024-11-17 13:20:07.867516] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:10:18.809 [2024-11-17 13:20:07.867709] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:18.809 [2024-11-17 13:20:07.867754] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:18.809 NewBaseBdev 00:10:18.809 [2024-11-17 13:20:07.867948] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:18.809 13:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.809 13:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:18.809 13:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:18.809 13:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:18.809 13:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:18.809 13:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:18.809 13:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:18.809 13:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:18.809 13:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.809 13:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.809 13:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.809 13:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:18.809 13:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.809 13:20:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.809 [ 00:10:18.809 { 00:10:18.809 "name": "NewBaseBdev", 00:10:18.809 "aliases": [ 00:10:18.809 "84b57290-017e-4e39-8697-adbf81bcff4d" 00:10:18.809 ], 00:10:18.809 "product_name": "Malloc disk", 00:10:18.809 "block_size": 512, 00:10:18.809 "num_blocks": 65536, 00:10:18.809 "uuid": "84b57290-017e-4e39-8697-adbf81bcff4d", 00:10:18.809 "assigned_rate_limits": { 00:10:18.809 "rw_ios_per_sec": 0, 00:10:18.809 "rw_mbytes_per_sec": 0, 00:10:18.809 "r_mbytes_per_sec": 0, 00:10:18.809 "w_mbytes_per_sec": 0 00:10:18.809 }, 00:10:18.809 "claimed": true, 00:10:18.809 "claim_type": "exclusive_write", 00:10:18.809 "zoned": false, 00:10:18.809 "supported_io_types": { 00:10:18.809 "read": true, 00:10:18.809 "write": true, 00:10:18.809 "unmap": true, 00:10:18.809 "flush": true, 00:10:18.809 "reset": true, 00:10:18.809 "nvme_admin": false, 00:10:18.809 "nvme_io": false, 00:10:18.809 "nvme_io_md": false, 00:10:18.809 "write_zeroes": true, 00:10:18.809 "zcopy": true, 00:10:18.809 "get_zone_info": false, 00:10:18.809 "zone_management": false, 00:10:18.809 "zone_append": false, 00:10:18.809 "compare": false, 00:10:18.809 "compare_and_write": false, 00:10:18.809 "abort": true, 00:10:18.809 "seek_hole": false, 00:10:18.809 "seek_data": false, 00:10:18.809 "copy": true, 00:10:18.809 "nvme_iov_md": false 00:10:18.809 }, 00:10:18.809 "memory_domains": [ 00:10:18.809 { 00:10:18.809 "dma_device_id": "system", 00:10:18.809 "dma_device_type": 1 00:10:18.809 }, 00:10:18.809 { 00:10:18.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.809 "dma_device_type": 2 00:10:18.809 } 00:10:18.809 ], 00:10:18.809 "driver_specific": {} 00:10:18.809 } 00:10:18.809 ] 00:10:18.809 13:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.809 13:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:18.809 13:20:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:18.809 13:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.809 13:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:18.809 13:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:18.809 13:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:18.809 13:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:18.809 13:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.809 13:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.809 13:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.809 13:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.809 13:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.809 13:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.809 13:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.809 13:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.809 13:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.809 13:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.809 "name": "Existed_Raid", 00:10:18.809 "uuid": "c6bd443d-c84f-4d2c-afca-903bdd768e94", 00:10:18.809 "strip_size_kb": 64, 00:10:18.809 
"state": "online", 00:10:18.809 "raid_level": "raid0", 00:10:18.809 "superblock": true, 00:10:18.809 "num_base_bdevs": 4, 00:10:18.809 "num_base_bdevs_discovered": 4, 00:10:18.809 "num_base_bdevs_operational": 4, 00:10:18.809 "base_bdevs_list": [ 00:10:18.809 { 00:10:18.809 "name": "NewBaseBdev", 00:10:18.809 "uuid": "84b57290-017e-4e39-8697-adbf81bcff4d", 00:10:18.809 "is_configured": true, 00:10:18.809 "data_offset": 2048, 00:10:18.809 "data_size": 63488 00:10:18.809 }, 00:10:18.809 { 00:10:18.809 "name": "BaseBdev2", 00:10:18.809 "uuid": "2b8a2c2f-c264-4b7f-bbce-517e3fd475c4", 00:10:18.809 "is_configured": true, 00:10:18.809 "data_offset": 2048, 00:10:18.810 "data_size": 63488 00:10:18.810 }, 00:10:18.810 { 00:10:18.810 "name": "BaseBdev3", 00:10:18.810 "uuid": "37056e1e-34e8-4d11-8ad1-501bf5c8bd28", 00:10:18.810 "is_configured": true, 00:10:18.810 "data_offset": 2048, 00:10:18.810 "data_size": 63488 00:10:18.810 }, 00:10:18.810 { 00:10:18.810 "name": "BaseBdev4", 00:10:18.810 "uuid": "4ea94cfb-27d7-469a-a5b4-65beffe98534", 00:10:18.810 "is_configured": true, 00:10:18.810 "data_offset": 2048, 00:10:18.810 "data_size": 63488 00:10:18.810 } 00:10:18.810 ] 00:10:18.810 }' 00:10:18.810 13:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.810 13:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.378 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:19.378 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:19.378 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:19.378 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:19.378 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:19.378 
13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:19.378 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:19.378 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:19.378 13:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.378 13:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.378 [2024-11-17 13:20:08.382367] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:19.378 13:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.378 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:19.378 "name": "Existed_Raid", 00:10:19.378 "aliases": [ 00:10:19.378 "c6bd443d-c84f-4d2c-afca-903bdd768e94" 00:10:19.378 ], 00:10:19.378 "product_name": "Raid Volume", 00:10:19.378 "block_size": 512, 00:10:19.378 "num_blocks": 253952, 00:10:19.378 "uuid": "c6bd443d-c84f-4d2c-afca-903bdd768e94", 00:10:19.378 "assigned_rate_limits": { 00:10:19.378 "rw_ios_per_sec": 0, 00:10:19.378 "rw_mbytes_per_sec": 0, 00:10:19.378 "r_mbytes_per_sec": 0, 00:10:19.378 "w_mbytes_per_sec": 0 00:10:19.378 }, 00:10:19.378 "claimed": false, 00:10:19.378 "zoned": false, 00:10:19.378 "supported_io_types": { 00:10:19.378 "read": true, 00:10:19.378 "write": true, 00:10:19.378 "unmap": true, 00:10:19.378 "flush": true, 00:10:19.378 "reset": true, 00:10:19.378 "nvme_admin": false, 00:10:19.378 "nvme_io": false, 00:10:19.378 "nvme_io_md": false, 00:10:19.378 "write_zeroes": true, 00:10:19.378 "zcopy": false, 00:10:19.378 "get_zone_info": false, 00:10:19.378 "zone_management": false, 00:10:19.378 "zone_append": false, 00:10:19.378 "compare": false, 00:10:19.378 "compare_and_write": false, 00:10:19.378 "abort": 
false, 00:10:19.378 "seek_hole": false, 00:10:19.378 "seek_data": false, 00:10:19.378 "copy": false, 00:10:19.378 "nvme_iov_md": false 00:10:19.378 }, 00:10:19.378 "memory_domains": [ 00:10:19.378 { 00:10:19.378 "dma_device_id": "system", 00:10:19.378 "dma_device_type": 1 00:10:19.378 }, 00:10:19.378 { 00:10:19.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.378 "dma_device_type": 2 00:10:19.378 }, 00:10:19.378 { 00:10:19.378 "dma_device_id": "system", 00:10:19.378 "dma_device_type": 1 00:10:19.378 }, 00:10:19.378 { 00:10:19.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.378 "dma_device_type": 2 00:10:19.378 }, 00:10:19.378 { 00:10:19.378 "dma_device_id": "system", 00:10:19.378 "dma_device_type": 1 00:10:19.378 }, 00:10:19.378 { 00:10:19.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.378 "dma_device_type": 2 00:10:19.378 }, 00:10:19.378 { 00:10:19.378 "dma_device_id": "system", 00:10:19.378 "dma_device_type": 1 00:10:19.378 }, 00:10:19.378 { 00:10:19.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.378 "dma_device_type": 2 00:10:19.378 } 00:10:19.378 ], 00:10:19.378 "driver_specific": { 00:10:19.378 "raid": { 00:10:19.378 "uuid": "c6bd443d-c84f-4d2c-afca-903bdd768e94", 00:10:19.378 "strip_size_kb": 64, 00:10:19.378 "state": "online", 00:10:19.378 "raid_level": "raid0", 00:10:19.378 "superblock": true, 00:10:19.378 "num_base_bdevs": 4, 00:10:19.378 "num_base_bdevs_discovered": 4, 00:10:19.378 "num_base_bdevs_operational": 4, 00:10:19.378 "base_bdevs_list": [ 00:10:19.378 { 00:10:19.378 "name": "NewBaseBdev", 00:10:19.378 "uuid": "84b57290-017e-4e39-8697-adbf81bcff4d", 00:10:19.378 "is_configured": true, 00:10:19.378 "data_offset": 2048, 00:10:19.378 "data_size": 63488 00:10:19.378 }, 00:10:19.378 { 00:10:19.378 "name": "BaseBdev2", 00:10:19.378 "uuid": "2b8a2c2f-c264-4b7f-bbce-517e3fd475c4", 00:10:19.378 "is_configured": true, 00:10:19.378 "data_offset": 2048, 00:10:19.378 "data_size": 63488 00:10:19.378 }, 00:10:19.378 { 00:10:19.378 
"name": "BaseBdev3", 00:10:19.378 "uuid": "37056e1e-34e8-4d11-8ad1-501bf5c8bd28", 00:10:19.378 "is_configured": true, 00:10:19.378 "data_offset": 2048, 00:10:19.378 "data_size": 63488 00:10:19.378 }, 00:10:19.378 { 00:10:19.378 "name": "BaseBdev4", 00:10:19.378 "uuid": "4ea94cfb-27d7-469a-a5b4-65beffe98534", 00:10:19.378 "is_configured": true, 00:10:19.378 "data_offset": 2048, 00:10:19.378 "data_size": 63488 00:10:19.378 } 00:10:19.378 ] 00:10:19.378 } 00:10:19.378 } 00:10:19.378 }' 00:10:19.378 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:19.378 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:19.378 BaseBdev2 00:10:19.378 BaseBdev3 00:10:19.378 BaseBdev4' 00:10:19.378 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.378 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:19.378 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:19.378 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:19.378 13:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.378 13:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.378 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.378 13:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.378 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:19.378 13:20:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:19.378 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:19.378 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:19.378 13:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.378 13:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.378 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.378 13:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.378 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:19.378 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:19.378 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:19.378 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.378 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:19.378 13:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.378 13:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.638 13:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.638 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:19.638 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:10:19.638 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:19.638 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:19.638 13:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.638 13:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.638 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.638 13:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.638 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:19.638 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:19.638 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:19.638 13:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.638 13:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.638 [2024-11-17 13:20:08.673479] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:19.638 [2024-11-17 13:20:08.673509] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:19.638 [2024-11-17 13:20:08.673580] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:19.638 [2024-11-17 13:20:08.673649] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:19.638 [2024-11-17 13:20:08.673659] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:10:19.638 13:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.638 13:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 69971 00:10:19.638 13:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 69971 ']' 00:10:19.638 13:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 69971 00:10:19.638 13:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:19.638 13:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:19.638 13:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69971 00:10:19.638 13:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:19.638 13:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:19.638 13:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69971' 00:10:19.638 killing process with pid 69971 00:10:19.638 13:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 69971 00:10:19.639 [2024-11-17 13:20:08.714059] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:19.639 13:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 69971 00:10:19.898 [2024-11-17 13:20:09.095243] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:21.285 13:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:21.285 00:10:21.285 real 0m11.716s 00:10:21.285 user 0m18.709s 00:10:21.285 sys 0m2.115s 00:10:21.285 13:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:21.285 13:20:10 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.285 ************************************ 00:10:21.285 END TEST raid_state_function_test_sb 00:10:21.285 ************************************ 00:10:21.285 13:20:10 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:10:21.285 13:20:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:21.285 13:20:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:21.285 13:20:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:21.285 ************************************ 00:10:21.285 START TEST raid_superblock_test 00:10:21.285 ************************************ 00:10:21.285 13:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:10:21.285 13:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:10:21.285 13:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:21.285 13:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:21.285 13:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:21.285 13:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:21.285 13:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:21.285 13:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:21.285 13:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:21.285 13:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:21.285 13:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:21.285 13:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:10:21.285 13:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:21.285 13:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:21.285 13:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:10:21.285 13:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:21.285 13:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:21.285 13:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70643 00:10:21.285 13:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:21.285 13:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70643 00:10:21.285 13:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 70643 ']' 00:10:21.285 13:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:21.285 13:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:21.285 13:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:21.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:21.285 13:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:21.285 13:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.285 [2024-11-17 13:20:10.324029] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:10:21.285 [2024-11-17 13:20:10.324244] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70643 ] 00:10:21.285 [2024-11-17 13:20:10.479671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:21.546 [2024-11-17 13:20:10.588427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.805 [2024-11-17 13:20:10.777879] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:21.805 [2024-11-17 13:20:10.778014] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:22.066 13:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:22.066 13:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:22.066 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:22.066 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:22.066 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:22.066 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:22.066 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:22.066 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:22.066 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:22.066 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:22.066 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:22.066 
13:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.066 13:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.066 malloc1 00:10:22.066 13:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.066 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:22.066 13:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.066 13:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.066 [2024-11-17 13:20:11.196314] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:22.066 [2024-11-17 13:20:11.196472] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.066 [2024-11-17 13:20:11.196517] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:22.066 [2024-11-17 13:20:11.196528] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.066 [2024-11-17 13:20:11.198838] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.066 [2024-11-17 13:20:11.198875] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:22.066 pt1 00:10:22.066 13:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.066 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:22.066 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:22.066 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:22.066 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:22.066 13:20:11 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:22.066 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:22.066 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:22.066 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:22.066 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:22.066 13:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.066 13:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.066 malloc2 00:10:22.066 13:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.066 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:22.066 13:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.066 13:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.066 [2024-11-17 13:20:11.248654] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:22.066 [2024-11-17 13:20:11.248770] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.066 [2024-11-17 13:20:11.248816] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:22.066 [2024-11-17 13:20:11.248848] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.066 [2024-11-17 13:20:11.251003] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.066 [2024-11-17 13:20:11.251073] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:22.066 
pt2 00:10:22.066 13:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.066 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:22.066 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:22.066 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:22.066 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:22.066 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:22.066 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:22.067 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:22.067 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:22.067 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:22.067 13:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.067 13:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.327 malloc3 00:10:22.327 13:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.327 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:22.327 13:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.327 13:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.327 [2024-11-17 13:20:11.318233] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:22.327 [2024-11-17 13:20:11.318345] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.327 [2024-11-17 13:20:11.318381] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:22.327 [2024-11-17 13:20:11.318409] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.327 [2024-11-17 13:20:11.320543] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.327 [2024-11-17 13:20:11.320612] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:22.327 pt3 00:10:22.327 13:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.327 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:22.327 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:22.327 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:22.327 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:22.328 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:22.328 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:22.328 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:22.328 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:22.328 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:22.328 13:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.328 13:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.328 malloc4 00:10:22.328 13:20:11 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.328 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:22.328 13:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.328 13:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.328 [2024-11-17 13:20:11.375726] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:22.328 [2024-11-17 13:20:11.375778] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.328 [2024-11-17 13:20:11.375793] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:22.328 [2024-11-17 13:20:11.375802] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.328 [2024-11-17 13:20:11.377859] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.328 [2024-11-17 13:20:11.377925] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:22.328 pt4 00:10:22.328 13:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.328 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:22.328 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:22.328 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:22.328 13:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.328 13:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.328 [2024-11-17 13:20:11.387736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:22.328 [2024-11-17 
13:20:11.389521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:22.328 [2024-11-17 13:20:11.389580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:22.328 [2024-11-17 13:20:11.389638] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:22.328 [2024-11-17 13:20:11.389808] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:22.328 [2024-11-17 13:20:11.389819] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:22.328 [2024-11-17 13:20:11.390049] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:22.328 [2024-11-17 13:20:11.390203] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:22.328 [2024-11-17 13:20:11.390234] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:22.328 [2024-11-17 13:20:11.390371] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:22.328 13:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.328 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:22.328 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:22.328 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:22.328 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:22.328 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:22.328 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:22.328 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:22.328 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.328 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.328 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.328 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.328 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:22.328 13:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.328 13:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.328 13:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.328 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.328 "name": "raid_bdev1", 00:10:22.328 "uuid": "bbfc3ef2-ae1e-4ec6-9516-bfa0351a61e4", 00:10:22.328 "strip_size_kb": 64, 00:10:22.328 "state": "online", 00:10:22.328 "raid_level": "raid0", 00:10:22.328 "superblock": true, 00:10:22.328 "num_base_bdevs": 4, 00:10:22.328 "num_base_bdevs_discovered": 4, 00:10:22.328 "num_base_bdevs_operational": 4, 00:10:22.328 "base_bdevs_list": [ 00:10:22.328 { 00:10:22.328 "name": "pt1", 00:10:22.328 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:22.328 "is_configured": true, 00:10:22.328 "data_offset": 2048, 00:10:22.328 "data_size": 63488 00:10:22.328 }, 00:10:22.328 { 00:10:22.328 "name": "pt2", 00:10:22.328 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:22.328 "is_configured": true, 00:10:22.328 "data_offset": 2048, 00:10:22.328 "data_size": 63488 00:10:22.328 }, 00:10:22.328 { 00:10:22.328 "name": "pt3", 00:10:22.328 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:22.328 "is_configured": true, 00:10:22.328 "data_offset": 2048, 00:10:22.328 
"data_size": 63488 00:10:22.328 }, 00:10:22.328 { 00:10:22.328 "name": "pt4", 00:10:22.328 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:22.328 "is_configured": true, 00:10:22.328 "data_offset": 2048, 00:10:22.328 "data_size": 63488 00:10:22.328 } 00:10:22.328 ] 00:10:22.328 }' 00:10:22.328 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.328 13:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.922 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:22.922 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:22.922 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:22.922 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:22.922 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:22.922 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:22.922 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:22.922 13:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.922 13:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.922 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:22.922 [2024-11-17 13:20:11.855335] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:22.922 13:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.922 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:22.922 "name": "raid_bdev1", 00:10:22.922 "aliases": [ 00:10:22.922 "bbfc3ef2-ae1e-4ec6-9516-bfa0351a61e4" 
00:10:22.922 ], 00:10:22.922 "product_name": "Raid Volume", 00:10:22.922 "block_size": 512, 00:10:22.922 "num_blocks": 253952, 00:10:22.922 "uuid": "bbfc3ef2-ae1e-4ec6-9516-bfa0351a61e4", 00:10:22.922 "assigned_rate_limits": { 00:10:22.922 "rw_ios_per_sec": 0, 00:10:22.922 "rw_mbytes_per_sec": 0, 00:10:22.922 "r_mbytes_per_sec": 0, 00:10:22.922 "w_mbytes_per_sec": 0 00:10:22.922 }, 00:10:22.922 "claimed": false, 00:10:22.922 "zoned": false, 00:10:22.922 "supported_io_types": { 00:10:22.922 "read": true, 00:10:22.922 "write": true, 00:10:22.922 "unmap": true, 00:10:22.922 "flush": true, 00:10:22.922 "reset": true, 00:10:22.922 "nvme_admin": false, 00:10:22.922 "nvme_io": false, 00:10:22.922 "nvme_io_md": false, 00:10:22.922 "write_zeroes": true, 00:10:22.922 "zcopy": false, 00:10:22.922 "get_zone_info": false, 00:10:22.922 "zone_management": false, 00:10:22.922 "zone_append": false, 00:10:22.922 "compare": false, 00:10:22.922 "compare_and_write": false, 00:10:22.922 "abort": false, 00:10:22.922 "seek_hole": false, 00:10:22.922 "seek_data": false, 00:10:22.922 "copy": false, 00:10:22.922 "nvme_iov_md": false 00:10:22.922 }, 00:10:22.922 "memory_domains": [ 00:10:22.922 { 00:10:22.922 "dma_device_id": "system", 00:10:22.922 "dma_device_type": 1 00:10:22.922 }, 00:10:22.922 { 00:10:22.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.922 "dma_device_type": 2 00:10:22.922 }, 00:10:22.922 { 00:10:22.922 "dma_device_id": "system", 00:10:22.922 "dma_device_type": 1 00:10:22.922 }, 00:10:22.922 { 00:10:22.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.922 "dma_device_type": 2 00:10:22.922 }, 00:10:22.922 { 00:10:22.922 "dma_device_id": "system", 00:10:22.922 "dma_device_type": 1 00:10:22.922 }, 00:10:22.922 { 00:10:22.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.922 "dma_device_type": 2 00:10:22.922 }, 00:10:22.922 { 00:10:22.922 "dma_device_id": "system", 00:10:22.922 "dma_device_type": 1 00:10:22.922 }, 00:10:22.922 { 00:10:22.922 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:22.922 "dma_device_type": 2 00:10:22.922 } 00:10:22.922 ], 00:10:22.922 "driver_specific": { 00:10:22.922 "raid": { 00:10:22.922 "uuid": "bbfc3ef2-ae1e-4ec6-9516-bfa0351a61e4", 00:10:22.922 "strip_size_kb": 64, 00:10:22.922 "state": "online", 00:10:22.922 "raid_level": "raid0", 00:10:22.922 "superblock": true, 00:10:22.922 "num_base_bdevs": 4, 00:10:22.922 "num_base_bdevs_discovered": 4, 00:10:22.922 "num_base_bdevs_operational": 4, 00:10:22.922 "base_bdevs_list": [ 00:10:22.922 { 00:10:22.922 "name": "pt1", 00:10:22.922 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:22.922 "is_configured": true, 00:10:22.922 "data_offset": 2048, 00:10:22.922 "data_size": 63488 00:10:22.922 }, 00:10:22.922 { 00:10:22.922 "name": "pt2", 00:10:22.922 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:22.923 "is_configured": true, 00:10:22.923 "data_offset": 2048, 00:10:22.923 "data_size": 63488 00:10:22.923 }, 00:10:22.923 { 00:10:22.923 "name": "pt3", 00:10:22.923 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:22.923 "is_configured": true, 00:10:22.923 "data_offset": 2048, 00:10:22.923 "data_size": 63488 00:10:22.923 }, 00:10:22.923 { 00:10:22.923 "name": "pt4", 00:10:22.923 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:22.923 "is_configured": true, 00:10:22.923 "data_offset": 2048, 00:10:22.923 "data_size": 63488 00:10:22.923 } 00:10:22.923 ] 00:10:22.923 } 00:10:22.923 } 00:10:22.923 }' 00:10:22.923 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:22.923 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:22.923 pt2 00:10:22.923 pt3 00:10:22.923 pt4' 00:10:22.923 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:22.923 13:20:11 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:22.923 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:22.923 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:22.923 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:22.923 13:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.923 13:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.923 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.923 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:22.923 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:22.923 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:22.923 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:22.923 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.923 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.923 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:22.923 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.923 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:22.923 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:22.923 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:22.923 13:20:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:22.923 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.923 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.923 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:22.923 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.923 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:22.923 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:22.923 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:22.923 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:22.923 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.923 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.923 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.191 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.191 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:23.191 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:23.191 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:23.191 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:23.191 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:23.191 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.191 [2024-11-17 13:20:12.174729] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:23.191 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.191 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=bbfc3ef2-ae1e-4ec6-9516-bfa0351a61e4 00:10:23.191 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z bbfc3ef2-ae1e-4ec6-9516-bfa0351a61e4 ']' 00:10:23.191 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:23.191 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.191 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.191 [2024-11-17 13:20:12.218371] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:23.191 [2024-11-17 13:20:12.218453] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:23.191 [2024-11-17 13:20:12.218555] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:23.191 [2024-11-17 13:20:12.218662] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:23.191 [2024-11-17 13:20:12.218745] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:23.191 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.191 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:23.191 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.191 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:10:23.191 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.191 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.191 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:23.191 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:23.191 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:23.191 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:23.191 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.191 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.191 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.191 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:23.191 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:23.191 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.191 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.191 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.191 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:23.191 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:23.191 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.191 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.191 13:20:12 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.191 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:23.191 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:23.191 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.191 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.191 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.191 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:23.191 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:23.191 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.191 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.191 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.191 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:23.191 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:23.191 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:23.191 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:23.191 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:23.191 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:23.191 13:20:12 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:23.191 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:23.191 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:23.191 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.191 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.191 [2024-11-17 13:20:12.382119] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:23.191 [2024-11-17 13:20:12.383973] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:23.191 [2024-11-17 13:20:12.384065] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:23.191 [2024-11-17 13:20:12.384117] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:23.191 [2024-11-17 13:20:12.384201] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:23.191 [2024-11-17 13:20:12.384324] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:23.191 [2024-11-17 13:20:12.384401] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:23.191 [2024-11-17 13:20:12.384471] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:23.191 [2024-11-17 13:20:12.384539] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:23.191 [2024-11-17 13:20:12.384583] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:10:23.191 request: 00:10:23.191 { 00:10:23.191 "name": "raid_bdev1", 00:10:23.191 "raid_level": "raid0", 00:10:23.191 "base_bdevs": [ 00:10:23.191 "malloc1", 00:10:23.191 "malloc2", 00:10:23.191 "malloc3", 00:10:23.191 "malloc4" 00:10:23.191 ], 00:10:23.191 "strip_size_kb": 64, 00:10:23.191 "superblock": false, 00:10:23.191 "method": "bdev_raid_create", 00:10:23.191 "req_id": 1 00:10:23.191 } 00:10:23.191 Got JSON-RPC error response 00:10:23.191 response: 00:10:23.191 { 00:10:23.191 "code": -17, 00:10:23.191 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:23.191 } 00:10:23.191 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:23.191 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:23.191 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:23.191 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:23.191 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:23.191 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.192 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.192 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.192 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:23.192 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.452 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:23.452 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:23.452 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:10:23.452 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.452 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.452 [2024-11-17 13:20:12.449957] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:23.452 [2024-11-17 13:20:12.450016] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:23.452 [2024-11-17 13:20:12.450032] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:23.452 [2024-11-17 13:20:12.450043] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:23.452 [2024-11-17 13:20:12.452216] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:23.452 [2024-11-17 13:20:12.452264] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:23.452 [2024-11-17 13:20:12.452343] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:23.452 [2024-11-17 13:20:12.452463] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:23.452 pt1 00:10:23.452 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.452 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:23.452 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:23.452 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:23.452 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:23.452 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:23.452 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:10:23.452 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.452 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.452 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.452 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.452 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.452 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:23.452 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.452 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.452 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.452 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.452 "name": "raid_bdev1", 00:10:23.452 "uuid": "bbfc3ef2-ae1e-4ec6-9516-bfa0351a61e4", 00:10:23.452 "strip_size_kb": 64, 00:10:23.452 "state": "configuring", 00:10:23.452 "raid_level": "raid0", 00:10:23.452 "superblock": true, 00:10:23.452 "num_base_bdevs": 4, 00:10:23.452 "num_base_bdevs_discovered": 1, 00:10:23.452 "num_base_bdevs_operational": 4, 00:10:23.452 "base_bdevs_list": [ 00:10:23.452 { 00:10:23.452 "name": "pt1", 00:10:23.452 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:23.452 "is_configured": true, 00:10:23.452 "data_offset": 2048, 00:10:23.452 "data_size": 63488 00:10:23.452 }, 00:10:23.452 { 00:10:23.452 "name": null, 00:10:23.452 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:23.452 "is_configured": false, 00:10:23.452 "data_offset": 2048, 00:10:23.452 "data_size": 63488 00:10:23.452 }, 00:10:23.452 { 00:10:23.452 "name": null, 00:10:23.452 
"uuid": "00000000-0000-0000-0000-000000000003", 00:10:23.452 "is_configured": false, 00:10:23.452 "data_offset": 2048, 00:10:23.452 "data_size": 63488 00:10:23.452 }, 00:10:23.452 { 00:10:23.452 "name": null, 00:10:23.452 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:23.452 "is_configured": false, 00:10:23.452 "data_offset": 2048, 00:10:23.452 "data_size": 63488 00:10:23.452 } 00:10:23.452 ] 00:10:23.452 }' 00:10:23.452 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.452 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.714 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:23.714 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:23.714 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.714 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.714 [2024-11-17 13:20:12.877256] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:23.714 [2024-11-17 13:20:12.877386] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:23.714 [2024-11-17 13:20:12.877422] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:23.714 [2024-11-17 13:20:12.877452] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:23.714 [2024-11-17 13:20:12.877926] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:23.714 [2024-11-17 13:20:12.877990] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:23.714 [2024-11-17 13:20:12.878115] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:23.714 [2024-11-17 13:20:12.878169] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:23.714 pt2 00:10:23.714 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.714 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:23.714 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.714 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.714 [2024-11-17 13:20:12.885236] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:23.715 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.715 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:23.715 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:23.715 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:23.715 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:23.715 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:23.715 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:23.715 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.715 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.715 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.715 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.715 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.715 13:20:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:23.715 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.715 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.715 13:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.974 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.974 "name": "raid_bdev1", 00:10:23.974 "uuid": "bbfc3ef2-ae1e-4ec6-9516-bfa0351a61e4", 00:10:23.974 "strip_size_kb": 64, 00:10:23.974 "state": "configuring", 00:10:23.974 "raid_level": "raid0", 00:10:23.974 "superblock": true, 00:10:23.974 "num_base_bdevs": 4, 00:10:23.974 "num_base_bdevs_discovered": 1, 00:10:23.974 "num_base_bdevs_operational": 4, 00:10:23.974 "base_bdevs_list": [ 00:10:23.974 { 00:10:23.974 "name": "pt1", 00:10:23.974 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:23.974 "is_configured": true, 00:10:23.974 "data_offset": 2048, 00:10:23.974 "data_size": 63488 00:10:23.974 }, 00:10:23.974 { 00:10:23.974 "name": null, 00:10:23.974 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:23.974 "is_configured": false, 00:10:23.974 "data_offset": 0, 00:10:23.974 "data_size": 63488 00:10:23.974 }, 00:10:23.974 { 00:10:23.974 "name": null, 00:10:23.974 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:23.974 "is_configured": false, 00:10:23.974 "data_offset": 2048, 00:10:23.974 "data_size": 63488 00:10:23.974 }, 00:10:23.974 { 00:10:23.974 "name": null, 00:10:23.974 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:23.974 "is_configured": false, 00:10:23.974 "data_offset": 2048, 00:10:23.974 "data_size": 63488 00:10:23.974 } 00:10:23.974 ] 00:10:23.974 }' 00:10:23.974 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.974 13:20:12 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:24.234 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:24.234 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:24.234 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:24.234 13:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.234 13:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.234 [2024-11-17 13:20:13.328503] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:24.234 [2024-11-17 13:20:13.328658] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.234 [2024-11-17 13:20:13.328681] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:24.234 [2024-11-17 13:20:13.328690] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.234 [2024-11-17 13:20:13.329144] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.234 [2024-11-17 13:20:13.329163] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:24.234 [2024-11-17 13:20:13.329272] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:24.234 [2024-11-17 13:20:13.329298] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:24.234 pt2 00:10:24.234 13:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.234 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:24.234 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:24.234 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:24.234 13:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.234 13:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.234 [2024-11-17 13:20:13.336441] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:24.234 [2024-11-17 13:20:13.336489] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.234 [2024-11-17 13:20:13.336529] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:24.234 [2024-11-17 13:20:13.336539] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.234 [2024-11-17 13:20:13.336888] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.234 [2024-11-17 13:20:13.336903] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:24.234 [2024-11-17 13:20:13.336969] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:24.234 [2024-11-17 13:20:13.337004] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:24.234 pt3 00:10:24.234 13:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.234 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:24.234 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:24.234 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:24.234 13:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.234 13:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.234 [2024-11-17 13:20:13.344407] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:24.234 [2024-11-17 13:20:13.344468] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.234 [2024-11-17 13:20:13.344508] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:24.234 [2024-11-17 13:20:13.344516] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.234 [2024-11-17 13:20:13.344854] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.234 [2024-11-17 13:20:13.344869] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:24.234 [2024-11-17 13:20:13.344925] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:24.234 [2024-11-17 13:20:13.344949] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:24.234 [2024-11-17 13:20:13.345092] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:24.234 [2024-11-17 13:20:13.345100] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:24.234 [2024-11-17 13:20:13.345323] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:24.234 [2024-11-17 13:20:13.345492] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:24.234 [2024-11-17 13:20:13.345519] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:24.235 [2024-11-17 13:20:13.345644] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:24.235 pt4 00:10:24.235 13:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.235 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:24.235 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:10:24.235 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:24.235 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:24.235 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:24.235 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:24.235 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:24.235 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:24.235 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.235 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.235 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.235 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.235 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.235 13:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.235 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:24.235 13:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.235 13:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.235 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.235 "name": "raid_bdev1", 00:10:24.235 "uuid": "bbfc3ef2-ae1e-4ec6-9516-bfa0351a61e4", 00:10:24.235 "strip_size_kb": 64, 00:10:24.235 "state": "online", 00:10:24.235 "raid_level": "raid0", 00:10:24.235 
"superblock": true, 00:10:24.235 "num_base_bdevs": 4, 00:10:24.235 "num_base_bdevs_discovered": 4, 00:10:24.235 "num_base_bdevs_operational": 4, 00:10:24.235 "base_bdevs_list": [ 00:10:24.235 { 00:10:24.235 "name": "pt1", 00:10:24.235 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:24.235 "is_configured": true, 00:10:24.235 "data_offset": 2048, 00:10:24.235 "data_size": 63488 00:10:24.235 }, 00:10:24.235 { 00:10:24.235 "name": "pt2", 00:10:24.235 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:24.235 "is_configured": true, 00:10:24.235 "data_offset": 2048, 00:10:24.235 "data_size": 63488 00:10:24.235 }, 00:10:24.235 { 00:10:24.235 "name": "pt3", 00:10:24.235 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:24.235 "is_configured": true, 00:10:24.235 "data_offset": 2048, 00:10:24.235 "data_size": 63488 00:10:24.235 }, 00:10:24.235 { 00:10:24.235 "name": "pt4", 00:10:24.235 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:24.235 "is_configured": true, 00:10:24.235 "data_offset": 2048, 00:10:24.235 "data_size": 63488 00:10:24.235 } 00:10:24.235 ] 00:10:24.235 }' 00:10:24.235 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.235 13:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.804 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:24.804 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:24.804 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:24.804 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:24.804 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:24.804 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:24.804 13:20:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:24.804 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:24.804 13:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.804 13:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.804 [2024-11-17 13:20:13.780018] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:24.804 13:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.804 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:24.804 "name": "raid_bdev1", 00:10:24.804 "aliases": [ 00:10:24.804 "bbfc3ef2-ae1e-4ec6-9516-bfa0351a61e4" 00:10:24.804 ], 00:10:24.804 "product_name": "Raid Volume", 00:10:24.804 "block_size": 512, 00:10:24.804 "num_blocks": 253952, 00:10:24.804 "uuid": "bbfc3ef2-ae1e-4ec6-9516-bfa0351a61e4", 00:10:24.804 "assigned_rate_limits": { 00:10:24.804 "rw_ios_per_sec": 0, 00:10:24.804 "rw_mbytes_per_sec": 0, 00:10:24.804 "r_mbytes_per_sec": 0, 00:10:24.804 "w_mbytes_per_sec": 0 00:10:24.804 }, 00:10:24.804 "claimed": false, 00:10:24.804 "zoned": false, 00:10:24.804 "supported_io_types": { 00:10:24.804 "read": true, 00:10:24.804 "write": true, 00:10:24.804 "unmap": true, 00:10:24.804 "flush": true, 00:10:24.804 "reset": true, 00:10:24.804 "nvme_admin": false, 00:10:24.804 "nvme_io": false, 00:10:24.804 "nvme_io_md": false, 00:10:24.804 "write_zeroes": true, 00:10:24.804 "zcopy": false, 00:10:24.804 "get_zone_info": false, 00:10:24.804 "zone_management": false, 00:10:24.804 "zone_append": false, 00:10:24.804 "compare": false, 00:10:24.804 "compare_and_write": false, 00:10:24.804 "abort": false, 00:10:24.804 "seek_hole": false, 00:10:24.804 "seek_data": false, 00:10:24.804 "copy": false, 00:10:24.804 "nvme_iov_md": false 00:10:24.804 }, 00:10:24.804 
"memory_domains": [ 00:10:24.804 { 00:10:24.804 "dma_device_id": "system", 00:10:24.804 "dma_device_type": 1 00:10:24.804 }, 00:10:24.804 { 00:10:24.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.804 "dma_device_type": 2 00:10:24.804 }, 00:10:24.804 { 00:10:24.804 "dma_device_id": "system", 00:10:24.804 "dma_device_type": 1 00:10:24.804 }, 00:10:24.804 { 00:10:24.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.804 "dma_device_type": 2 00:10:24.804 }, 00:10:24.804 { 00:10:24.804 "dma_device_id": "system", 00:10:24.804 "dma_device_type": 1 00:10:24.804 }, 00:10:24.804 { 00:10:24.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.804 "dma_device_type": 2 00:10:24.804 }, 00:10:24.804 { 00:10:24.804 "dma_device_id": "system", 00:10:24.804 "dma_device_type": 1 00:10:24.804 }, 00:10:24.804 { 00:10:24.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.804 "dma_device_type": 2 00:10:24.804 } 00:10:24.804 ], 00:10:24.804 "driver_specific": { 00:10:24.804 "raid": { 00:10:24.804 "uuid": "bbfc3ef2-ae1e-4ec6-9516-bfa0351a61e4", 00:10:24.804 "strip_size_kb": 64, 00:10:24.804 "state": "online", 00:10:24.804 "raid_level": "raid0", 00:10:24.804 "superblock": true, 00:10:24.804 "num_base_bdevs": 4, 00:10:24.804 "num_base_bdevs_discovered": 4, 00:10:24.804 "num_base_bdevs_operational": 4, 00:10:24.804 "base_bdevs_list": [ 00:10:24.804 { 00:10:24.804 "name": "pt1", 00:10:24.804 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:24.804 "is_configured": true, 00:10:24.804 "data_offset": 2048, 00:10:24.804 "data_size": 63488 00:10:24.804 }, 00:10:24.804 { 00:10:24.804 "name": "pt2", 00:10:24.804 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:24.804 "is_configured": true, 00:10:24.804 "data_offset": 2048, 00:10:24.804 "data_size": 63488 00:10:24.804 }, 00:10:24.804 { 00:10:24.804 "name": "pt3", 00:10:24.804 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:24.804 "is_configured": true, 00:10:24.804 "data_offset": 2048, 00:10:24.804 "data_size": 63488 
00:10:24.804 }, 00:10:24.804 { 00:10:24.804 "name": "pt4", 00:10:24.804 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:24.804 "is_configured": true, 00:10:24.804 "data_offset": 2048, 00:10:24.804 "data_size": 63488 00:10:24.804 } 00:10:24.804 ] 00:10:24.804 } 00:10:24.804 } 00:10:24.804 }' 00:10:24.804 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:24.804 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:24.804 pt2 00:10:24.804 pt3 00:10:24.804 pt4' 00:10:24.804 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:24.804 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:24.804 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:24.804 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:24.804 13:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.804 13:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.804 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:24.804 13:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.804 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:24.804 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:24.805 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:24.805 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:10:24.805 13:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.805 13:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.805 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:24.805 13:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.805 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:24.805 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:24.805 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:24.805 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:24.805 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:24.805 13:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.805 13:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.805 13:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.064 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:25.064 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:25.064 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:25.064 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:25.064 13:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.064 13:20:14 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:25.064 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.064 13:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.064 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:25.064 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:25.064 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:25.064 13:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.064 13:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.064 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:25.064 [2024-11-17 13:20:14.103471] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:25.064 13:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.064 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' bbfc3ef2-ae1e-4ec6-9516-bfa0351a61e4 '!=' bbfc3ef2-ae1e-4ec6-9516-bfa0351a61e4 ']' 00:10:25.064 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:10:25.064 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:25.064 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:25.064 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70643 00:10:25.064 13:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 70643 ']' 00:10:25.064 13:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 70643 00:10:25.064 13:20:14 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:10:25.064 13:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:25.065 13:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70643 00:10:25.065 13:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:25.065 13:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:25.065 13:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70643' 00:10:25.065 killing process with pid 70643 00:10:25.065 13:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 70643 00:10:25.065 [2024-11-17 13:20:14.177848] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:25.065 13:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 70643 00:10:25.065 [2024-11-17 13:20:14.178026] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:25.065 [2024-11-17 13:20:14.178120] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:25.065 [2024-11-17 13:20:14.178130] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:25.634 [2024-11-17 13:20:14.560925] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:26.573 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:26.573 00:10:26.573 real 0m5.409s 00:10:26.573 user 0m7.715s 00:10:26.573 sys 0m0.924s 00:10:26.573 13:20:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:26.573 13:20:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.573 ************************************ 00:10:26.573 END TEST raid_superblock_test 
00:10:26.573 ************************************ 00:10:26.573 13:20:15 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:10:26.573 13:20:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:26.573 13:20:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:26.573 13:20:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:26.573 ************************************ 00:10:26.573 START TEST raid_read_error_test 00:10:26.573 ************************************ 00:10:26.573 13:20:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:10:26.573 13:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:26.573 13:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:26.573 13:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:26.573 13:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:26.573 13:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:26.573 13:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:26.573 13:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:26.573 13:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:26.573 13:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:26.573 13:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:26.573 13:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:26.573 13:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:26.573 13:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( 
i++ )) 00:10:26.573 13:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:26.573 13:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:26.573 13:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:26.573 13:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:26.573 13:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:26.573 13:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:26.573 13:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:26.573 13:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:26.573 13:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:26.573 13:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:26.573 13:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:26.573 13:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:26.573 13:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:26.573 13:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:26.573 13:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:26.573 13:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.nAcmyEiVpS 00:10:26.574 13:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=70907 00:10:26.574 13:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f 
-L bdev_raid 00:10:26.574 13:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 70907 00:10:26.574 13:20:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 70907 ']' 00:10:26.574 13:20:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:26.574 13:20:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:26.574 13:20:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:26.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:26.574 13:20:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:26.574 13:20:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.834 [2024-11-17 13:20:15.827042] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:10:26.834 [2024-11-17 13:20:15.827169] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70907 ] 00:10:26.834 [2024-11-17 13:20:16.007384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:27.093 [2024-11-17 13:20:16.117135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.353 [2024-11-17 13:20:16.323847] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:27.353 [2024-11-17 13:20:16.323917] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:27.612 13:20:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:27.612 13:20:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:27.612 13:20:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:27.612 13:20:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:27.612 13:20:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.612 13:20:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.612 BaseBdev1_malloc 00:10:27.612 13:20:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.612 13:20:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:27.612 13:20:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.612 13:20:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.612 true 00:10:27.612 13:20:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:27.612 13:20:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:27.612 13:20:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.612 13:20:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.612 [2024-11-17 13:20:16.711401] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:27.612 [2024-11-17 13:20:16.711458] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:27.612 [2024-11-17 13:20:16.711479] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:27.612 [2024-11-17 13:20:16.711490] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:27.612 [2024-11-17 13:20:16.713620] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:27.612 [2024-11-17 13:20:16.713655] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:27.612 BaseBdev1 00:10:27.612 13:20:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.612 13:20:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:27.612 13:20:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:27.612 13:20:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.612 13:20:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.612 BaseBdev2_malloc 00:10:27.612 13:20:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.612 13:20:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:27.612 13:20:16 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.612 13:20:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.612 true 00:10:27.612 13:20:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.612 13:20:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:27.612 13:20:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.612 13:20:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.612 [2024-11-17 13:20:16.780915] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:27.612 [2024-11-17 13:20:16.780978] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:27.612 [2024-11-17 13:20:16.780996] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:27.612 [2024-11-17 13:20:16.781009] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:27.612 [2024-11-17 13:20:16.783336] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:27.612 [2024-11-17 13:20:16.783371] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:27.612 BaseBdev2 00:10:27.612 13:20:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.612 13:20:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:27.612 13:20:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:27.612 13:20:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.612 13:20:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.872 BaseBdev3_malloc 00:10:27.872 13:20:16 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.872 13:20:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:27.872 13:20:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.872 13:20:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.872 true 00:10:27.872 13:20:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.872 13:20:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:27.872 13:20:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.872 13:20:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.872 [2024-11-17 13:20:16.858896] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:27.872 [2024-11-17 13:20:16.858965] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:27.872 [2024-11-17 13:20:16.858984] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:27.872 [2024-11-17 13:20:16.858995] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:27.872 [2024-11-17 13:20:16.861291] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:27.872 [2024-11-17 13:20:16.861326] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:27.872 BaseBdev3 00:10:27.872 13:20:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.872 13:20:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:27.872 13:20:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:10:27.872 13:20:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.872 13:20:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.872 BaseBdev4_malloc 00:10:27.872 13:20:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.872 13:20:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:27.872 13:20:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.872 13:20:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.872 true 00:10:27.872 13:20:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.872 13:20:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:27.872 13:20:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.872 13:20:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.872 [2024-11-17 13:20:16.925701] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:27.872 [2024-11-17 13:20:16.925758] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:27.872 [2024-11-17 13:20:16.925777] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:27.872 [2024-11-17 13:20:16.925789] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:27.872 [2024-11-17 13:20:16.927848] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:27.872 [2024-11-17 13:20:16.927887] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:27.872 BaseBdev4 00:10:27.872 13:20:16 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.872 13:20:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:27.872 13:20:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.872 13:20:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.872 [2024-11-17 13:20:16.937739] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:27.872 [2024-11-17 13:20:16.939571] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:27.872 [2024-11-17 13:20:16.939643] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:27.872 [2024-11-17 13:20:16.939708] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:27.872 [2024-11-17 13:20:16.939940] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:10:27.872 [2024-11-17 13:20:16.939963] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:27.872 [2024-11-17 13:20:16.940231] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:10:27.872 [2024-11-17 13:20:16.940405] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:10:27.872 [2024-11-17 13:20:16.940424] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:10:27.872 [2024-11-17 13:20:16.940588] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:27.872 13:20:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.872 13:20:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:27.872 13:20:16 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:27.872 13:20:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:27.872 13:20:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:27.872 13:20:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:27.872 13:20:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:27.872 13:20:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.872 13:20:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.872 13:20:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.872 13:20:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.872 13:20:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.872 13:20:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:27.872 13:20:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.872 13:20:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.872 13:20:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.872 13:20:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.872 "name": "raid_bdev1", 00:10:27.872 "uuid": "8e0f14a7-8da8-4d83-adda-b5d4c6de0f02", 00:10:27.872 "strip_size_kb": 64, 00:10:27.872 "state": "online", 00:10:27.872 "raid_level": "raid0", 00:10:27.872 "superblock": true, 00:10:27.872 "num_base_bdevs": 4, 00:10:27.872 "num_base_bdevs_discovered": 4, 00:10:27.872 "num_base_bdevs_operational": 4, 00:10:27.872 "base_bdevs_list": [ 00:10:27.873 
{ 00:10:27.873 "name": "BaseBdev1", 00:10:27.873 "uuid": "7328d015-a40d-5a5a-a986-83332213371e", 00:10:27.873 "is_configured": true, 00:10:27.873 "data_offset": 2048, 00:10:27.873 "data_size": 63488 00:10:27.873 }, 00:10:27.873 { 00:10:27.873 "name": "BaseBdev2", 00:10:27.873 "uuid": "bae12f4e-093f-557f-9a12-ba2f7a996cd7", 00:10:27.873 "is_configured": true, 00:10:27.873 "data_offset": 2048, 00:10:27.873 "data_size": 63488 00:10:27.873 }, 00:10:27.873 { 00:10:27.873 "name": "BaseBdev3", 00:10:27.873 "uuid": "971872f1-d885-50b3-a194-a0a5dd36a110", 00:10:27.873 "is_configured": true, 00:10:27.873 "data_offset": 2048, 00:10:27.873 "data_size": 63488 00:10:27.873 }, 00:10:27.873 { 00:10:27.873 "name": "BaseBdev4", 00:10:27.873 "uuid": "fae247b1-9f14-5501-a115-e19e4d89b6ef", 00:10:27.873 "is_configured": true, 00:10:27.873 "data_offset": 2048, 00:10:27.873 "data_size": 63488 00:10:27.873 } 00:10:27.873 ] 00:10:27.873 }' 00:10:27.873 13:20:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.873 13:20:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.442 13:20:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:28.442 13:20:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:28.442 [2024-11-17 13:20:17.446202] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:10:29.420 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:29.420 13:20:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.420 13:20:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.420 13:20:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.420 13:20:18 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:29.421 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:29.421 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:29.421 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:29.421 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:29.421 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:29.421 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:29.421 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:29.421 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:29.421 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.421 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.421 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.421 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.421 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.421 13:20:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.421 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:29.421 13:20:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.421 13:20:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.421 13:20:18 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.421 "name": "raid_bdev1", 00:10:29.421 "uuid": "8e0f14a7-8da8-4d83-adda-b5d4c6de0f02", 00:10:29.421 "strip_size_kb": 64, 00:10:29.421 "state": "online", 00:10:29.421 "raid_level": "raid0", 00:10:29.421 "superblock": true, 00:10:29.421 "num_base_bdevs": 4, 00:10:29.421 "num_base_bdevs_discovered": 4, 00:10:29.421 "num_base_bdevs_operational": 4, 00:10:29.421 "base_bdevs_list": [ 00:10:29.421 { 00:10:29.421 "name": "BaseBdev1", 00:10:29.421 "uuid": "7328d015-a40d-5a5a-a986-83332213371e", 00:10:29.421 "is_configured": true, 00:10:29.421 "data_offset": 2048, 00:10:29.421 "data_size": 63488 00:10:29.421 }, 00:10:29.421 { 00:10:29.421 "name": "BaseBdev2", 00:10:29.421 "uuid": "bae12f4e-093f-557f-9a12-ba2f7a996cd7", 00:10:29.421 "is_configured": true, 00:10:29.421 "data_offset": 2048, 00:10:29.421 "data_size": 63488 00:10:29.421 }, 00:10:29.421 { 00:10:29.421 "name": "BaseBdev3", 00:10:29.421 "uuid": "971872f1-d885-50b3-a194-a0a5dd36a110", 00:10:29.421 "is_configured": true, 00:10:29.421 "data_offset": 2048, 00:10:29.421 "data_size": 63488 00:10:29.421 }, 00:10:29.421 { 00:10:29.421 "name": "BaseBdev4", 00:10:29.421 "uuid": "fae247b1-9f14-5501-a115-e19e4d89b6ef", 00:10:29.421 "is_configured": true, 00:10:29.421 "data_offset": 2048, 00:10:29.421 "data_size": 63488 00:10:29.421 } 00:10:29.421 ] 00:10:29.421 }' 00:10:29.421 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.421 13:20:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.681 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:29.681 13:20:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.681 13:20:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.681 [2024-11-17 13:20:18.868619] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:29.681 [2024-11-17 13:20:18.868661] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:29.681 [2024-11-17 13:20:18.871235] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:29.681 [2024-11-17 13:20:18.871296] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:29.681 [2024-11-17 13:20:18.871338] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:29.681 [2024-11-17 13:20:18.871349] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:10:29.681 { 00:10:29.681 "results": [ 00:10:29.681 { 00:10:29.681 "job": "raid_bdev1", 00:10:29.681 "core_mask": "0x1", 00:10:29.681 "workload": "randrw", 00:10:29.681 "percentage": 50, 00:10:29.681 "status": "finished", 00:10:29.681 "queue_depth": 1, 00:10:29.681 "io_size": 131072, 00:10:29.681 "runtime": 1.423395, 00:10:29.681 "iops": 16230.210166538453, 00:10:29.681 "mibps": 2028.7762708173066, 00:10:29.681 "io_failed": 1, 00:10:29.681 "io_timeout": 0, 00:10:29.681 "avg_latency_us": 85.62030459001998, 00:10:29.681 "min_latency_us": 25.2646288209607, 00:10:29.681 "max_latency_us": 1402.2986899563318 00:10:29.681 } 00:10:29.681 ], 00:10:29.681 "core_count": 1 00:10:29.681 } 00:10:29.681 13:20:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.681 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 70907 00:10:29.681 13:20:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 70907 ']' 00:10:29.681 13:20:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 70907 00:10:29.681 13:20:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:29.681 13:20:18 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:29.681 13:20:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70907 00:10:29.941 killing process with pid 70907 00:10:29.941 13:20:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:29.941 13:20:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:29.941 13:20:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70907' 00:10:29.941 13:20:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 70907 00:10:29.941 [2024-11-17 13:20:18.909022] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:29.941 13:20:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 70907 00:10:30.199 [2024-11-17 13:20:19.229211] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:31.579 13:20:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.nAcmyEiVpS 00:10:31.579 13:20:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:31.579 13:20:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:31.579 13:20:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:10:31.579 13:20:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:31.579 ************************************ 00:10:31.579 END TEST raid_read_error_test 00:10:31.579 ************************************ 00:10:31.579 13:20:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:31.579 13:20:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:31.579 13:20:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:10:31.579 00:10:31.579 real 0m4.687s 
00:10:31.579 user 0m5.520s 00:10:31.579 sys 0m0.601s 00:10:31.579 13:20:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:31.579 13:20:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.579 13:20:20 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:10:31.579 13:20:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:31.579 13:20:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:31.579 13:20:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:31.579 ************************************ 00:10:31.579 START TEST raid_write_error_test 00:10:31.579 ************************************ 00:10:31.579 13:20:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:10:31.579 13:20:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:31.579 13:20:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:31.579 13:20:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:31.579 13:20:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:31.579 13:20:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:31.579 13:20:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:31.579 13:20:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:31.579 13:20:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:31.579 13:20:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:31.579 13:20:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:31.579 13:20:20 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:31.579 13:20:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:31.579 13:20:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:31.579 13:20:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:31.579 13:20:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:31.579 13:20:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:31.579 13:20:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:31.579 13:20:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:31.579 13:20:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:31.579 13:20:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:31.579 13:20:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:31.579 13:20:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:31.579 13:20:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:31.579 13:20:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:31.579 13:20:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:31.579 13:20:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:31.579 13:20:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:31.579 13:20:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:31.579 13:20:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Fwf7O3GJVb 00:10:31.579 13:20:20 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71049 00:10:31.579 13:20:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:31.579 13:20:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71049 00:10:31.579 13:20:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 71049 ']' 00:10:31.579 13:20:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:31.579 13:20:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:31.579 13:20:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:31.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:31.579 13:20:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:31.579 13:20:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.579 [2024-11-17 13:20:20.582507] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:10:31.579 [2024-11-17 13:20:20.582618] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71049 ] 00:10:31.579 [2024-11-17 13:20:20.739539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:31.839 [2024-11-17 13:20:20.851130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.839 [2024-11-17 13:20:21.057268] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:31.839 [2024-11-17 13:20:21.057318] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:32.408 13:20:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:32.408 13:20:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:32.408 13:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:32.408 13:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:32.408 13:20:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.408 13:20:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.408 BaseBdev1_malloc 00:10:32.408 13:20:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.408 13:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:32.408 13:20:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.408 13:20:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.408 true 00:10:32.408 13:20:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:32.408 13:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:32.408 13:20:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.408 13:20:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.408 [2024-11-17 13:20:21.481233] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:32.408 [2024-11-17 13:20:21.481360] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:32.408 [2024-11-17 13:20:21.481399] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:32.408 [2024-11-17 13:20:21.481411] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:32.408 [2024-11-17 13:20:21.483540] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:32.408 [2024-11-17 13:20:21.483580] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:32.408 BaseBdev1 00:10:32.408 13:20:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.408 13:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:32.408 13:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:32.408 13:20:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.408 13:20:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.408 BaseBdev2_malloc 00:10:32.408 13:20:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.408 13:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:32.408 13:20:21 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.408 13:20:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.408 true 00:10:32.408 13:20:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.408 13:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:32.408 13:20:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.408 13:20:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.408 [2024-11-17 13:20:21.548865] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:32.408 [2024-11-17 13:20:21.548923] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:32.408 [2024-11-17 13:20:21.548939] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:32.408 [2024-11-17 13:20:21.548949] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:32.408 [2024-11-17 13:20:21.551009] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:32.408 [2024-11-17 13:20:21.551051] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:32.408 BaseBdev2 00:10:32.408 13:20:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.408 13:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:32.408 13:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:32.408 13:20:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.408 13:20:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:32.408 BaseBdev3_malloc 00:10:32.408 13:20:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.408 13:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:32.408 13:20:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.409 13:20:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.409 true 00:10:32.409 13:20:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.409 13:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:32.409 13:20:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.409 13:20:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.409 [2024-11-17 13:20:21.625887] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:32.409 [2024-11-17 13:20:21.625939] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:32.409 [2024-11-17 13:20:21.625956] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:32.409 [2024-11-17 13:20:21.625966] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:32.409 [2024-11-17 13:20:21.628036] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:32.409 [2024-11-17 13:20:21.628145] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:32.668 BaseBdev3 00:10:32.668 13:20:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.668 13:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:32.668 13:20:21 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:32.668 13:20:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.668 13:20:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.668 BaseBdev4_malloc 00:10:32.668 13:20:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.668 13:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:32.668 13:20:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.668 13:20:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.668 true 00:10:32.668 13:20:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.668 13:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:32.668 13:20:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.668 13:20:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.668 [2024-11-17 13:20:21.693876] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:32.668 [2024-11-17 13:20:21.693996] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:32.668 [2024-11-17 13:20:21.694031] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:32.668 [2024-11-17 13:20:21.694062] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:32.668 [2024-11-17 13:20:21.696199] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:32.668 [2024-11-17 13:20:21.696288] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:32.668 BaseBdev4 
00:10:32.668 13:20:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.668 13:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:32.668 13:20:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.668 13:20:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.668 [2024-11-17 13:20:21.705915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:32.668 [2024-11-17 13:20:21.707731] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:32.668 [2024-11-17 13:20:21.707804] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:32.668 [2024-11-17 13:20:21.707868] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:32.668 [2024-11-17 13:20:21.708077] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:10:32.668 [2024-11-17 13:20:21.708095] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:32.668 [2024-11-17 13:20:21.708337] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:10:32.668 [2024-11-17 13:20:21.708505] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:10:32.668 [2024-11-17 13:20:21.708516] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:10:32.668 [2024-11-17 13:20:21.708667] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:32.668 13:20:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.668 13:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:10:32.668 13:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:32.669 13:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:32.669 13:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:32.669 13:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:32.669 13:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:32.669 13:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.669 13:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.669 13:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.669 13:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.669 13:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.669 13:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:32.669 13:20:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.669 13:20:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.669 13:20:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.669 13:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.669 "name": "raid_bdev1", 00:10:32.669 "uuid": "b8bbc4f8-7004-45a5-a78a-241043d0965d", 00:10:32.669 "strip_size_kb": 64, 00:10:32.669 "state": "online", 00:10:32.669 "raid_level": "raid0", 00:10:32.669 "superblock": true, 00:10:32.669 "num_base_bdevs": 4, 00:10:32.669 "num_base_bdevs_discovered": 4, 00:10:32.669 
"num_base_bdevs_operational": 4, 00:10:32.669 "base_bdevs_list": [ 00:10:32.669 { 00:10:32.669 "name": "BaseBdev1", 00:10:32.669 "uuid": "dc54bc46-0d62-5abd-b621-f294b40a9767", 00:10:32.669 "is_configured": true, 00:10:32.669 "data_offset": 2048, 00:10:32.669 "data_size": 63488 00:10:32.669 }, 00:10:32.669 { 00:10:32.669 "name": "BaseBdev2", 00:10:32.669 "uuid": "4a929523-a7a3-5c1d-99f8-b1f5df9ee5e9", 00:10:32.669 "is_configured": true, 00:10:32.669 "data_offset": 2048, 00:10:32.669 "data_size": 63488 00:10:32.669 }, 00:10:32.669 { 00:10:32.669 "name": "BaseBdev3", 00:10:32.669 "uuid": "a319accd-d994-58d9-99d5-bcfe586fef33", 00:10:32.669 "is_configured": true, 00:10:32.669 "data_offset": 2048, 00:10:32.669 "data_size": 63488 00:10:32.669 }, 00:10:32.669 { 00:10:32.669 "name": "BaseBdev4", 00:10:32.669 "uuid": "0555ddf2-a5f0-5fe2-aed0-633cd22c8b2b", 00:10:32.669 "is_configured": true, 00:10:32.669 "data_offset": 2048, 00:10:32.669 "data_size": 63488 00:10:32.669 } 00:10:32.669 ] 00:10:32.669 }' 00:10:32.669 13:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.669 13:20:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.928 13:20:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:32.928 13:20:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:33.188 [2024-11-17 13:20:22.230406] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:10:34.127 13:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:34.127 13:20:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.127 13:20:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.127 13:20:23 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.127 13:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:34.127 13:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:34.127 13:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:34.127 13:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:34.127 13:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:34.127 13:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:34.127 13:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:34.127 13:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:34.127 13:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:34.127 13:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.127 13:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.127 13:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.127 13:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.127 13:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.127 13:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:34.127 13:20:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.127 13:20:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.127 13:20:23 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.127 13:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.127 "name": "raid_bdev1", 00:10:34.127 "uuid": "b8bbc4f8-7004-45a5-a78a-241043d0965d", 00:10:34.127 "strip_size_kb": 64, 00:10:34.127 "state": "online", 00:10:34.127 "raid_level": "raid0", 00:10:34.127 "superblock": true, 00:10:34.127 "num_base_bdevs": 4, 00:10:34.127 "num_base_bdevs_discovered": 4, 00:10:34.127 "num_base_bdevs_operational": 4, 00:10:34.127 "base_bdevs_list": [ 00:10:34.127 { 00:10:34.128 "name": "BaseBdev1", 00:10:34.128 "uuid": "dc54bc46-0d62-5abd-b621-f294b40a9767", 00:10:34.128 "is_configured": true, 00:10:34.128 "data_offset": 2048, 00:10:34.128 "data_size": 63488 00:10:34.128 }, 00:10:34.128 { 00:10:34.128 "name": "BaseBdev2", 00:10:34.128 "uuid": "4a929523-a7a3-5c1d-99f8-b1f5df9ee5e9", 00:10:34.128 "is_configured": true, 00:10:34.128 "data_offset": 2048, 00:10:34.128 "data_size": 63488 00:10:34.128 }, 00:10:34.128 { 00:10:34.128 "name": "BaseBdev3", 00:10:34.128 "uuid": "a319accd-d994-58d9-99d5-bcfe586fef33", 00:10:34.128 "is_configured": true, 00:10:34.128 "data_offset": 2048, 00:10:34.128 "data_size": 63488 00:10:34.128 }, 00:10:34.128 { 00:10:34.128 "name": "BaseBdev4", 00:10:34.128 "uuid": "0555ddf2-a5f0-5fe2-aed0-633cd22c8b2b", 00:10:34.128 "is_configured": true, 00:10:34.128 "data_offset": 2048, 00:10:34.128 "data_size": 63488 00:10:34.128 } 00:10:34.128 ] 00:10:34.128 }' 00:10:34.128 13:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.128 13:20:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.387 13:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:34.387 13:20:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.387 13:20:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:10:34.387 [2024-11-17 13:20:23.564529] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:34.387 [2024-11-17 13:20:23.564644] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:34.387 [2024-11-17 13:20:23.567295] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:34.387 [2024-11-17 13:20:23.567403] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:34.387 [2024-11-17 13:20:23.567482] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:34.387 [2024-11-17 13:20:23.567529] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:10:34.387 { 00:10:34.387 "results": [ 00:10:34.387 { 00:10:34.387 "job": "raid_bdev1", 00:10:34.387 "core_mask": "0x1", 00:10:34.387 "workload": "randrw", 00:10:34.387 "percentage": 50, 00:10:34.387 "status": "finished", 00:10:34.387 "queue_depth": 1, 00:10:34.387 "io_size": 131072, 00:10:34.387 "runtime": 1.334968, 00:10:34.387 "iops": 16007.125264425815, 00:10:34.387 "mibps": 2000.8906580532268, 00:10:34.387 "io_failed": 1, 00:10:34.387 "io_timeout": 0, 00:10:34.387 "avg_latency_us": 86.92965079806201, 00:10:34.387 "min_latency_us": 26.382532751091702, 00:10:34.387 "max_latency_us": 1373.6803493449781 00:10:34.387 } 00:10:34.387 ], 00:10:34.387 "core_count": 1 00:10:34.387 } 00:10:34.387 13:20:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.387 13:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71049 00:10:34.387 13:20:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 71049 ']' 00:10:34.387 13:20:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 71049 00:10:34.387 13:20:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # 
uname 00:10:34.387 13:20:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:34.387 13:20:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71049 00:10:34.387 13:20:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:34.387 13:20:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:34.387 13:20:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71049' 00:10:34.387 killing process with pid 71049 00:10:34.387 13:20:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 71049 00:10:34.647 [2024-11-17 13:20:23.611176] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:34.647 13:20:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 71049 00:10:34.906 [2024-11-17 13:20:23.931822] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:35.872 13:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Fwf7O3GJVb 00:10:35.872 13:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:35.872 13:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:35.872 13:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:10:35.872 13:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:35.872 13:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:35.872 13:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:35.872 13:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:10:35.872 00:10:35.872 real 0m4.608s 00:10:35.872 user 0m5.364s 00:10:35.872 sys 0m0.606s 00:10:35.872 
************************************ 00:10:35.872 END TEST raid_write_error_test 00:10:35.872 ************************************ 00:10:35.872 13:20:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:35.872 13:20:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.143 13:20:25 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:36.143 13:20:25 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:10:36.143 13:20:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:36.143 13:20:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:36.143 13:20:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:36.143 ************************************ 00:10:36.143 START TEST raid_state_function_test 00:10:36.144 ************************************ 00:10:36.144 13:20:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:10:36.144 13:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:36.144 13:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:36.144 13:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:36.144 13:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:36.144 13:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:36.144 13:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:36.144 13:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:36.144 13:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:36.144 13:20:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:36.144 13:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:36.144 13:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:36.144 13:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:36.144 13:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:36.144 13:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:36.144 13:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:36.144 13:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:36.144 13:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:36.144 13:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:36.144 13:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:36.144 13:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:36.144 13:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:36.144 13:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:36.144 13:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:36.144 13:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:36.144 13:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:36.144 13:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:36.144 13:20:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:36.144 13:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:36.144 13:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:36.144 13:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71193 00:10:36.144 13:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:36.144 13:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71193' 00:10:36.144 Process raid pid: 71193 00:10:36.144 13:20:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71193 00:10:36.144 13:20:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71193 ']' 00:10:36.144 13:20:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:36.144 13:20:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:36.144 13:20:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:36.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:36.144 13:20:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:36.144 13:20:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.144 [2024-11-17 13:20:25.262500] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:10:36.144 [2024-11-17 13:20:25.262692] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:36.403 [2024-11-17 13:20:25.440573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.403 [2024-11-17 13:20:25.557852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.662 [2024-11-17 13:20:25.759535] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:36.662 [2024-11-17 13:20:25.759642] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:36.921 13:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:36.921 13:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:36.921 13:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:36.922 13:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.922 13:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.922 [2024-11-17 13:20:26.083577] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:36.922 [2024-11-17 13:20:26.083707] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:36.922 [2024-11-17 13:20:26.083738] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:36.922 [2024-11-17 13:20:26.083762] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:36.922 [2024-11-17 13:20:26.083786] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:36.922 [2024-11-17 13:20:26.083809] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:36.922 [2024-11-17 13:20:26.083828] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:36.922 [2024-11-17 13:20:26.083876] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:36.922 13:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.922 13:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:36.922 13:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.922 13:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:36.922 13:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:36.922 13:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:36.922 13:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:36.922 13:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.922 13:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.922 13:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.922 13:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.922 13:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.922 13:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.922 13:20:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.922 13:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.922 13:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.922 13:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.922 "name": "Existed_Raid", 00:10:36.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.922 "strip_size_kb": 64, 00:10:36.922 "state": "configuring", 00:10:36.922 "raid_level": "concat", 00:10:36.922 "superblock": false, 00:10:36.922 "num_base_bdevs": 4, 00:10:36.922 "num_base_bdevs_discovered": 0, 00:10:36.922 "num_base_bdevs_operational": 4, 00:10:36.922 "base_bdevs_list": [ 00:10:36.922 { 00:10:36.922 "name": "BaseBdev1", 00:10:36.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.922 "is_configured": false, 00:10:36.922 "data_offset": 0, 00:10:36.922 "data_size": 0 00:10:36.922 }, 00:10:36.922 { 00:10:36.922 "name": "BaseBdev2", 00:10:36.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.922 "is_configured": false, 00:10:36.922 "data_offset": 0, 00:10:36.922 "data_size": 0 00:10:36.922 }, 00:10:36.922 { 00:10:36.922 "name": "BaseBdev3", 00:10:36.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.922 "is_configured": false, 00:10:36.922 "data_offset": 0, 00:10:36.922 "data_size": 0 00:10:36.922 }, 00:10:36.922 { 00:10:36.922 "name": "BaseBdev4", 00:10:36.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.922 "is_configured": false, 00:10:36.922 "data_offset": 0, 00:10:36.922 "data_size": 0 00:10:36.922 } 00:10:36.922 ] 00:10:36.922 }' 00:10:36.922 13:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.922 13:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.491 13:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:10:37.491 13:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.491 13:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.491 [2024-11-17 13:20:26.518758] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:37.491 [2024-11-17 13:20:26.518868] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:37.491 13:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.491 13:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:37.491 13:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.491 13:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.491 [2024-11-17 13:20:26.530738] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:37.491 [2024-11-17 13:20:26.530781] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:37.491 [2024-11-17 13:20:26.530790] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:37.491 [2024-11-17 13:20:26.530799] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:37.491 [2024-11-17 13:20:26.530805] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:37.491 [2024-11-17 13:20:26.530814] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:37.491 [2024-11-17 13:20:26.530821] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:37.491 [2024-11-17 13:20:26.530828] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:37.491 13:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.491 13:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:37.491 13:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.491 13:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.491 [2024-11-17 13:20:26.577185] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:37.491 BaseBdev1 00:10:37.491 13:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.491 13:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:37.491 13:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:37.491 13:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:37.491 13:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:37.491 13:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:37.491 13:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:37.491 13:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:37.491 13:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.491 13:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.491 13:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.491 13:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:37.491 13:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.491 13:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.491 [ 00:10:37.491 { 00:10:37.491 "name": "BaseBdev1", 00:10:37.491 "aliases": [ 00:10:37.491 "6252a44f-2a37-4793-81bb-1617fe52e106" 00:10:37.491 ], 00:10:37.491 "product_name": "Malloc disk", 00:10:37.491 "block_size": 512, 00:10:37.491 "num_blocks": 65536, 00:10:37.491 "uuid": "6252a44f-2a37-4793-81bb-1617fe52e106", 00:10:37.491 "assigned_rate_limits": { 00:10:37.491 "rw_ios_per_sec": 0, 00:10:37.491 "rw_mbytes_per_sec": 0, 00:10:37.491 "r_mbytes_per_sec": 0, 00:10:37.491 "w_mbytes_per_sec": 0 00:10:37.491 }, 00:10:37.491 "claimed": true, 00:10:37.491 "claim_type": "exclusive_write", 00:10:37.491 "zoned": false, 00:10:37.491 "supported_io_types": { 00:10:37.491 "read": true, 00:10:37.491 "write": true, 00:10:37.491 "unmap": true, 00:10:37.491 "flush": true, 00:10:37.491 "reset": true, 00:10:37.491 "nvme_admin": false, 00:10:37.491 "nvme_io": false, 00:10:37.491 "nvme_io_md": false, 00:10:37.491 "write_zeroes": true, 00:10:37.491 "zcopy": true, 00:10:37.491 "get_zone_info": false, 00:10:37.491 "zone_management": false, 00:10:37.491 "zone_append": false, 00:10:37.491 "compare": false, 00:10:37.491 "compare_and_write": false, 00:10:37.491 "abort": true, 00:10:37.491 "seek_hole": false, 00:10:37.491 "seek_data": false, 00:10:37.491 "copy": true, 00:10:37.491 "nvme_iov_md": false 00:10:37.491 }, 00:10:37.491 "memory_domains": [ 00:10:37.491 { 00:10:37.491 "dma_device_id": "system", 00:10:37.491 "dma_device_type": 1 00:10:37.491 }, 00:10:37.491 { 00:10:37.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.491 "dma_device_type": 2 00:10:37.491 } 00:10:37.491 ], 00:10:37.491 "driver_specific": {} 00:10:37.491 } 00:10:37.491 ] 00:10:37.491 13:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:37.491 13:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:37.491 13:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:37.491 13:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:37.491 13:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.491 13:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:37.491 13:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:37.491 13:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:37.491 13:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.491 13:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.491 13:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.491 13:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.491 13:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.491 13:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.491 13:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.491 13:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.491 13:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.491 13:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.491 "name": "Existed_Raid", 
00:10:37.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.491 "strip_size_kb": 64, 00:10:37.491 "state": "configuring", 00:10:37.491 "raid_level": "concat", 00:10:37.491 "superblock": false, 00:10:37.491 "num_base_bdevs": 4, 00:10:37.491 "num_base_bdevs_discovered": 1, 00:10:37.491 "num_base_bdevs_operational": 4, 00:10:37.491 "base_bdevs_list": [ 00:10:37.491 { 00:10:37.491 "name": "BaseBdev1", 00:10:37.491 "uuid": "6252a44f-2a37-4793-81bb-1617fe52e106", 00:10:37.491 "is_configured": true, 00:10:37.491 "data_offset": 0, 00:10:37.491 "data_size": 65536 00:10:37.491 }, 00:10:37.491 { 00:10:37.491 "name": "BaseBdev2", 00:10:37.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.491 "is_configured": false, 00:10:37.491 "data_offset": 0, 00:10:37.491 "data_size": 0 00:10:37.491 }, 00:10:37.491 { 00:10:37.491 "name": "BaseBdev3", 00:10:37.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.491 "is_configured": false, 00:10:37.491 "data_offset": 0, 00:10:37.491 "data_size": 0 00:10:37.491 }, 00:10:37.491 { 00:10:37.491 "name": "BaseBdev4", 00:10:37.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.491 "is_configured": false, 00:10:37.491 "data_offset": 0, 00:10:37.491 "data_size": 0 00:10:37.491 } 00:10:37.491 ] 00:10:37.491 }' 00:10:37.491 13:20:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.491 13:20:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.059 13:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:38.059 13:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.059 13:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.059 [2024-11-17 13:20:27.072364] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:38.059 [2024-11-17 13:20:27.072459] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:38.059 13:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.059 13:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:38.059 13:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.059 13:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.059 [2024-11-17 13:20:27.084396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:38.059 [2024-11-17 13:20:27.086204] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:38.059 [2024-11-17 13:20:27.086321] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:38.059 [2024-11-17 13:20:27.086364] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:38.059 [2024-11-17 13:20:27.086378] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:38.059 [2024-11-17 13:20:27.086386] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:38.059 [2024-11-17 13:20:27.086396] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:38.059 13:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.059 13:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:38.059 13:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:38.059 13:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:10:38.059 13:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.059 13:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.059 13:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:38.059 13:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.059 13:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:38.059 13:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.059 13:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.059 13:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.059 13:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.059 13:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.059 13:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.059 13:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.059 13:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.059 13:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.059 13:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.059 "name": "Existed_Raid", 00:10:38.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.059 "strip_size_kb": 64, 00:10:38.059 "state": "configuring", 00:10:38.059 "raid_level": "concat", 00:10:38.059 "superblock": false, 00:10:38.059 "num_base_bdevs": 4, 00:10:38.059 
"num_base_bdevs_discovered": 1, 00:10:38.059 "num_base_bdevs_operational": 4, 00:10:38.059 "base_bdevs_list": [ 00:10:38.059 { 00:10:38.059 "name": "BaseBdev1", 00:10:38.059 "uuid": "6252a44f-2a37-4793-81bb-1617fe52e106", 00:10:38.059 "is_configured": true, 00:10:38.059 "data_offset": 0, 00:10:38.059 "data_size": 65536 00:10:38.059 }, 00:10:38.059 { 00:10:38.059 "name": "BaseBdev2", 00:10:38.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.059 "is_configured": false, 00:10:38.059 "data_offset": 0, 00:10:38.059 "data_size": 0 00:10:38.059 }, 00:10:38.059 { 00:10:38.059 "name": "BaseBdev3", 00:10:38.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.059 "is_configured": false, 00:10:38.059 "data_offset": 0, 00:10:38.059 "data_size": 0 00:10:38.059 }, 00:10:38.059 { 00:10:38.059 "name": "BaseBdev4", 00:10:38.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.059 "is_configured": false, 00:10:38.059 "data_offset": 0, 00:10:38.059 "data_size": 0 00:10:38.059 } 00:10:38.059 ] 00:10:38.059 }' 00:10:38.059 13:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.059 13:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.318 13:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:38.318 13:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.318 13:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.577 [2024-11-17 13:20:27.581801] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:38.577 BaseBdev2 00:10:38.577 13:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.577 13:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:38.577 13:20:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:38.577 13:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:38.577 13:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:38.577 13:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:38.577 13:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:38.577 13:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:38.577 13:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.577 13:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.577 13:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.577 13:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:38.577 13:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.577 13:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.577 [ 00:10:38.577 { 00:10:38.577 "name": "BaseBdev2", 00:10:38.577 "aliases": [ 00:10:38.577 "73132a93-53a7-4310-9b9b-25a4bc1069e6" 00:10:38.577 ], 00:10:38.577 "product_name": "Malloc disk", 00:10:38.577 "block_size": 512, 00:10:38.577 "num_blocks": 65536, 00:10:38.577 "uuid": "73132a93-53a7-4310-9b9b-25a4bc1069e6", 00:10:38.577 "assigned_rate_limits": { 00:10:38.577 "rw_ios_per_sec": 0, 00:10:38.577 "rw_mbytes_per_sec": 0, 00:10:38.577 "r_mbytes_per_sec": 0, 00:10:38.577 "w_mbytes_per_sec": 0 00:10:38.577 }, 00:10:38.577 "claimed": true, 00:10:38.577 "claim_type": "exclusive_write", 00:10:38.577 "zoned": false, 00:10:38.577 "supported_io_types": { 
00:10:38.577 "read": true, 00:10:38.577 "write": true, 00:10:38.577 "unmap": true, 00:10:38.577 "flush": true, 00:10:38.577 "reset": true, 00:10:38.577 "nvme_admin": false, 00:10:38.577 "nvme_io": false, 00:10:38.577 "nvme_io_md": false, 00:10:38.577 "write_zeroes": true, 00:10:38.577 "zcopy": true, 00:10:38.577 "get_zone_info": false, 00:10:38.577 "zone_management": false, 00:10:38.577 "zone_append": false, 00:10:38.577 "compare": false, 00:10:38.577 "compare_and_write": false, 00:10:38.577 "abort": true, 00:10:38.577 "seek_hole": false, 00:10:38.577 "seek_data": false, 00:10:38.577 "copy": true, 00:10:38.577 "nvme_iov_md": false 00:10:38.577 }, 00:10:38.577 "memory_domains": [ 00:10:38.577 { 00:10:38.577 "dma_device_id": "system", 00:10:38.577 "dma_device_type": 1 00:10:38.577 }, 00:10:38.577 { 00:10:38.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.577 "dma_device_type": 2 00:10:38.577 } 00:10:38.577 ], 00:10:38.577 "driver_specific": {} 00:10:38.577 } 00:10:38.577 ] 00:10:38.577 13:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.577 13:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:38.577 13:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:38.577 13:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:38.577 13:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:38.577 13:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.577 13:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.577 13:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:38.577 13:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:10:38.577 13:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:38.577 13:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.577 13:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.577 13:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.577 13:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.577 13:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.577 13:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.577 13:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.577 13:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.577 13:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.578 13:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.578 "name": "Existed_Raid", 00:10:38.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.578 "strip_size_kb": 64, 00:10:38.578 "state": "configuring", 00:10:38.578 "raid_level": "concat", 00:10:38.578 "superblock": false, 00:10:38.578 "num_base_bdevs": 4, 00:10:38.578 "num_base_bdevs_discovered": 2, 00:10:38.578 "num_base_bdevs_operational": 4, 00:10:38.578 "base_bdevs_list": [ 00:10:38.578 { 00:10:38.578 "name": "BaseBdev1", 00:10:38.578 "uuid": "6252a44f-2a37-4793-81bb-1617fe52e106", 00:10:38.578 "is_configured": true, 00:10:38.578 "data_offset": 0, 00:10:38.578 "data_size": 65536 00:10:38.578 }, 00:10:38.578 { 00:10:38.578 "name": "BaseBdev2", 00:10:38.578 "uuid": "73132a93-53a7-4310-9b9b-25a4bc1069e6", 00:10:38.578 
"is_configured": true, 00:10:38.578 "data_offset": 0, 00:10:38.578 "data_size": 65536 00:10:38.578 }, 00:10:38.578 { 00:10:38.578 "name": "BaseBdev3", 00:10:38.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.578 "is_configured": false, 00:10:38.578 "data_offset": 0, 00:10:38.578 "data_size": 0 00:10:38.578 }, 00:10:38.578 { 00:10:38.578 "name": "BaseBdev4", 00:10:38.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.578 "is_configured": false, 00:10:38.578 "data_offset": 0, 00:10:38.578 "data_size": 0 00:10:38.578 } 00:10:38.578 ] 00:10:38.578 }' 00:10:38.578 13:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.578 13:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.147 13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:39.147 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.147 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.147 [2024-11-17 13:20:28.120679] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:39.147 BaseBdev3 00:10:39.147 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.147 13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:39.147 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:39.147 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:39.147 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:39.147 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:39.147 13:20:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:39.147 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:39.147 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.147 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.147 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.147 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:39.147 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.147 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.147 [ 00:10:39.147 { 00:10:39.147 "name": "BaseBdev3", 00:10:39.147 "aliases": [ 00:10:39.147 "982a580b-a608-4962-940e-9b1b7bb684ff" 00:10:39.147 ], 00:10:39.147 "product_name": "Malloc disk", 00:10:39.147 "block_size": 512, 00:10:39.147 "num_blocks": 65536, 00:10:39.147 "uuid": "982a580b-a608-4962-940e-9b1b7bb684ff", 00:10:39.147 "assigned_rate_limits": { 00:10:39.147 "rw_ios_per_sec": 0, 00:10:39.147 "rw_mbytes_per_sec": 0, 00:10:39.147 "r_mbytes_per_sec": 0, 00:10:39.147 "w_mbytes_per_sec": 0 00:10:39.147 }, 00:10:39.147 "claimed": true, 00:10:39.147 "claim_type": "exclusive_write", 00:10:39.147 "zoned": false, 00:10:39.147 "supported_io_types": { 00:10:39.147 "read": true, 00:10:39.147 "write": true, 00:10:39.147 "unmap": true, 00:10:39.147 "flush": true, 00:10:39.147 "reset": true, 00:10:39.147 "nvme_admin": false, 00:10:39.147 "nvme_io": false, 00:10:39.147 "nvme_io_md": false, 00:10:39.147 "write_zeroes": true, 00:10:39.147 "zcopy": true, 00:10:39.147 "get_zone_info": false, 00:10:39.147 "zone_management": false, 00:10:39.147 "zone_append": false, 00:10:39.147 "compare": false, 00:10:39.147 "compare_and_write": false, 
00:10:39.147 "abort": true, 00:10:39.147 "seek_hole": false, 00:10:39.147 "seek_data": false, 00:10:39.147 "copy": true, 00:10:39.147 "nvme_iov_md": false 00:10:39.147 }, 00:10:39.147 "memory_domains": [ 00:10:39.147 { 00:10:39.147 "dma_device_id": "system", 00:10:39.147 "dma_device_type": 1 00:10:39.147 }, 00:10:39.147 { 00:10:39.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.147 "dma_device_type": 2 00:10:39.147 } 00:10:39.147 ], 00:10:39.147 "driver_specific": {} 00:10:39.147 } 00:10:39.147 ] 00:10:39.147 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.147 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:39.147 13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:39.147 13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:39.147 13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:39.147 13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.147 13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.147 13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:39.147 13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.147 13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:39.147 13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.147 13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.147 13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:39.147 13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.147 13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.147 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.147 13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.147 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.147 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.147 13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.147 "name": "Existed_Raid", 00:10:39.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.147 "strip_size_kb": 64, 00:10:39.147 "state": "configuring", 00:10:39.147 "raid_level": "concat", 00:10:39.147 "superblock": false, 00:10:39.147 "num_base_bdevs": 4, 00:10:39.147 "num_base_bdevs_discovered": 3, 00:10:39.147 "num_base_bdevs_operational": 4, 00:10:39.147 "base_bdevs_list": [ 00:10:39.147 { 00:10:39.147 "name": "BaseBdev1", 00:10:39.147 "uuid": "6252a44f-2a37-4793-81bb-1617fe52e106", 00:10:39.147 "is_configured": true, 00:10:39.147 "data_offset": 0, 00:10:39.147 "data_size": 65536 00:10:39.147 }, 00:10:39.147 { 00:10:39.147 "name": "BaseBdev2", 00:10:39.147 "uuid": "73132a93-53a7-4310-9b9b-25a4bc1069e6", 00:10:39.147 "is_configured": true, 00:10:39.147 "data_offset": 0, 00:10:39.147 "data_size": 65536 00:10:39.147 }, 00:10:39.147 { 00:10:39.147 "name": "BaseBdev3", 00:10:39.147 "uuid": "982a580b-a608-4962-940e-9b1b7bb684ff", 00:10:39.147 "is_configured": true, 00:10:39.147 "data_offset": 0, 00:10:39.147 "data_size": 65536 00:10:39.147 }, 00:10:39.147 { 00:10:39.147 "name": "BaseBdev4", 00:10:39.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.147 "is_configured": false, 
00:10:39.147 "data_offset": 0, 00:10:39.147 "data_size": 0 00:10:39.147 } 00:10:39.147 ] 00:10:39.147 }' 00:10:39.147 13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.147 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.407 13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:39.407 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.407 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.666 [2024-11-17 13:20:28.639001] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:39.666 [2024-11-17 13:20:28.639050] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:39.666 [2024-11-17 13:20:28.639059] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:39.666 [2024-11-17 13:20:28.639349] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:39.666 [2024-11-17 13:20:28.639510] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:39.666 [2024-11-17 13:20:28.639524] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:39.666 [2024-11-17 13:20:28.639826] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:39.666 BaseBdev4 00:10:39.666 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.666 13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:39.666 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:39.666 13:20:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:39.666 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:39.666 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:39.666 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:39.666 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:39.666 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.666 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.666 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.666 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:39.666 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.666 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.666 [ 00:10:39.666 { 00:10:39.666 "name": "BaseBdev4", 00:10:39.666 "aliases": [ 00:10:39.666 "ef777b4f-a6e0-424e-830a-7cf2e53d7515" 00:10:39.666 ], 00:10:39.666 "product_name": "Malloc disk", 00:10:39.666 "block_size": 512, 00:10:39.666 "num_blocks": 65536, 00:10:39.666 "uuid": "ef777b4f-a6e0-424e-830a-7cf2e53d7515", 00:10:39.666 "assigned_rate_limits": { 00:10:39.666 "rw_ios_per_sec": 0, 00:10:39.666 "rw_mbytes_per_sec": 0, 00:10:39.666 "r_mbytes_per_sec": 0, 00:10:39.666 "w_mbytes_per_sec": 0 00:10:39.666 }, 00:10:39.666 "claimed": true, 00:10:39.666 "claim_type": "exclusive_write", 00:10:39.666 "zoned": false, 00:10:39.666 "supported_io_types": { 00:10:39.666 "read": true, 00:10:39.666 "write": true, 00:10:39.666 "unmap": true, 00:10:39.666 "flush": true, 00:10:39.666 "reset": true, 00:10:39.666 
"nvme_admin": false, 00:10:39.666 "nvme_io": false, 00:10:39.666 "nvme_io_md": false, 00:10:39.666 "write_zeroes": true, 00:10:39.666 "zcopy": true, 00:10:39.666 "get_zone_info": false, 00:10:39.666 "zone_management": false, 00:10:39.666 "zone_append": false, 00:10:39.666 "compare": false, 00:10:39.666 "compare_and_write": false, 00:10:39.666 "abort": true, 00:10:39.666 "seek_hole": false, 00:10:39.666 "seek_data": false, 00:10:39.666 "copy": true, 00:10:39.666 "nvme_iov_md": false 00:10:39.666 }, 00:10:39.666 "memory_domains": [ 00:10:39.666 { 00:10:39.666 "dma_device_id": "system", 00:10:39.666 "dma_device_type": 1 00:10:39.666 }, 00:10:39.666 { 00:10:39.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.666 "dma_device_type": 2 00:10:39.666 } 00:10:39.666 ], 00:10:39.666 "driver_specific": {} 00:10:39.666 } 00:10:39.666 ] 00:10:39.666 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.666 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:39.666 13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:39.666 13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:39.666 13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:39.666 13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.666 13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:39.666 13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:39.666 13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.666 13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:39.666 
13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.666 13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.666 13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.666 13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.666 13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.666 13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.666 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.666 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.666 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.666 13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.666 "name": "Existed_Raid", 00:10:39.666 "uuid": "850d5e26-1910-4537-9761-a70663f2d0ac", 00:10:39.666 "strip_size_kb": 64, 00:10:39.666 "state": "online", 00:10:39.667 "raid_level": "concat", 00:10:39.667 "superblock": false, 00:10:39.667 "num_base_bdevs": 4, 00:10:39.667 "num_base_bdevs_discovered": 4, 00:10:39.667 "num_base_bdevs_operational": 4, 00:10:39.667 "base_bdevs_list": [ 00:10:39.667 { 00:10:39.667 "name": "BaseBdev1", 00:10:39.667 "uuid": "6252a44f-2a37-4793-81bb-1617fe52e106", 00:10:39.667 "is_configured": true, 00:10:39.667 "data_offset": 0, 00:10:39.667 "data_size": 65536 00:10:39.667 }, 00:10:39.667 { 00:10:39.667 "name": "BaseBdev2", 00:10:39.667 "uuid": "73132a93-53a7-4310-9b9b-25a4bc1069e6", 00:10:39.667 "is_configured": true, 00:10:39.667 "data_offset": 0, 00:10:39.667 "data_size": 65536 00:10:39.667 }, 00:10:39.667 { 00:10:39.667 "name": "BaseBdev3", 
00:10:39.667 "uuid": "982a580b-a608-4962-940e-9b1b7bb684ff", 00:10:39.667 "is_configured": true, 00:10:39.667 "data_offset": 0, 00:10:39.667 "data_size": 65536 00:10:39.667 }, 00:10:39.667 { 00:10:39.667 "name": "BaseBdev4", 00:10:39.667 "uuid": "ef777b4f-a6e0-424e-830a-7cf2e53d7515", 00:10:39.667 "is_configured": true, 00:10:39.667 "data_offset": 0, 00:10:39.667 "data_size": 65536 00:10:39.667 } 00:10:39.667 ] 00:10:39.667 }' 00:10:39.667 13:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.667 13:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.926 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:39.926 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:39.926 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:39.926 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:39.926 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:39.926 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:39.926 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:39.926 13:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.926 13:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.926 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:39.926 [2024-11-17 13:20:29.130587] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:39.926 13:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.185 
13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:40.185 "name": "Existed_Raid", 00:10:40.185 "aliases": [ 00:10:40.185 "850d5e26-1910-4537-9761-a70663f2d0ac" 00:10:40.185 ], 00:10:40.185 "product_name": "Raid Volume", 00:10:40.185 "block_size": 512, 00:10:40.185 "num_blocks": 262144, 00:10:40.185 "uuid": "850d5e26-1910-4537-9761-a70663f2d0ac", 00:10:40.185 "assigned_rate_limits": { 00:10:40.185 "rw_ios_per_sec": 0, 00:10:40.185 "rw_mbytes_per_sec": 0, 00:10:40.185 "r_mbytes_per_sec": 0, 00:10:40.185 "w_mbytes_per_sec": 0 00:10:40.185 }, 00:10:40.185 "claimed": false, 00:10:40.185 "zoned": false, 00:10:40.185 "supported_io_types": { 00:10:40.185 "read": true, 00:10:40.185 "write": true, 00:10:40.185 "unmap": true, 00:10:40.185 "flush": true, 00:10:40.185 "reset": true, 00:10:40.185 "nvme_admin": false, 00:10:40.185 "nvme_io": false, 00:10:40.185 "nvme_io_md": false, 00:10:40.185 "write_zeroes": true, 00:10:40.185 "zcopy": false, 00:10:40.185 "get_zone_info": false, 00:10:40.185 "zone_management": false, 00:10:40.185 "zone_append": false, 00:10:40.185 "compare": false, 00:10:40.185 "compare_and_write": false, 00:10:40.185 "abort": false, 00:10:40.185 "seek_hole": false, 00:10:40.185 "seek_data": false, 00:10:40.185 "copy": false, 00:10:40.185 "nvme_iov_md": false 00:10:40.185 }, 00:10:40.185 "memory_domains": [ 00:10:40.185 { 00:10:40.185 "dma_device_id": "system", 00:10:40.185 "dma_device_type": 1 00:10:40.185 }, 00:10:40.185 { 00:10:40.185 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.185 "dma_device_type": 2 00:10:40.185 }, 00:10:40.185 { 00:10:40.185 "dma_device_id": "system", 00:10:40.185 "dma_device_type": 1 00:10:40.185 }, 00:10:40.185 { 00:10:40.185 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.185 "dma_device_type": 2 00:10:40.185 }, 00:10:40.185 { 00:10:40.185 "dma_device_id": "system", 00:10:40.185 "dma_device_type": 1 00:10:40.185 }, 00:10:40.185 { 00:10:40.185 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:40.185 "dma_device_type": 2 00:10:40.185 }, 00:10:40.185 { 00:10:40.185 "dma_device_id": "system", 00:10:40.185 "dma_device_type": 1 00:10:40.185 }, 00:10:40.185 { 00:10:40.185 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.185 "dma_device_type": 2 00:10:40.185 } 00:10:40.185 ], 00:10:40.185 "driver_specific": { 00:10:40.185 "raid": { 00:10:40.185 "uuid": "850d5e26-1910-4537-9761-a70663f2d0ac", 00:10:40.185 "strip_size_kb": 64, 00:10:40.185 "state": "online", 00:10:40.185 "raid_level": "concat", 00:10:40.185 "superblock": false, 00:10:40.185 "num_base_bdevs": 4, 00:10:40.185 "num_base_bdevs_discovered": 4, 00:10:40.185 "num_base_bdevs_operational": 4, 00:10:40.185 "base_bdevs_list": [ 00:10:40.185 { 00:10:40.185 "name": "BaseBdev1", 00:10:40.185 "uuid": "6252a44f-2a37-4793-81bb-1617fe52e106", 00:10:40.185 "is_configured": true, 00:10:40.185 "data_offset": 0, 00:10:40.185 "data_size": 65536 00:10:40.185 }, 00:10:40.185 { 00:10:40.185 "name": "BaseBdev2", 00:10:40.185 "uuid": "73132a93-53a7-4310-9b9b-25a4bc1069e6", 00:10:40.185 "is_configured": true, 00:10:40.185 "data_offset": 0, 00:10:40.185 "data_size": 65536 00:10:40.185 }, 00:10:40.185 { 00:10:40.185 "name": "BaseBdev3", 00:10:40.185 "uuid": "982a580b-a608-4962-940e-9b1b7bb684ff", 00:10:40.185 "is_configured": true, 00:10:40.185 "data_offset": 0, 00:10:40.185 "data_size": 65536 00:10:40.185 }, 00:10:40.185 { 00:10:40.185 "name": "BaseBdev4", 00:10:40.185 "uuid": "ef777b4f-a6e0-424e-830a-7cf2e53d7515", 00:10:40.185 "is_configured": true, 00:10:40.185 "data_offset": 0, 00:10:40.185 "data_size": 65536 00:10:40.185 } 00:10:40.185 ] 00:10:40.185 } 00:10:40.185 } 00:10:40.185 }' 00:10:40.185 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:40.185 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:40.185 BaseBdev2 
00:10:40.185 BaseBdev3 00:10:40.185 BaseBdev4' 00:10:40.185 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.185 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:40.185 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:40.185 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:40.185 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.185 13:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.185 13:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.185 13:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.185 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:40.186 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:40.186 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:40.186 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:40.186 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.186 13:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.186 13:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.186 13:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.186 13:20:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:40.186 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:40.186 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:40.186 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:40.186 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.186 13:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.186 13:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.186 13:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.186 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:40.186 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:40.186 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:40.186 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:40.186 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.186 13:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.186 13:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.444 13:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.444 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:40.444 13:20:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:40.444 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:40.444 13:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.444 13:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.444 [2024-11-17 13:20:29.445766] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:40.444 [2024-11-17 13:20:29.445845] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:40.444 [2024-11-17 13:20:29.445919] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:40.444 13:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.444 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:40.444 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:40.444 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:40.444 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:40.444 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:40.444 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:10:40.444 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.444 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:40.444 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:40.444 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:40.444 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:40.444 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.444 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.444 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.444 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.444 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.445 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.445 13:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.445 13:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.445 13:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.445 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.445 "name": "Existed_Raid", 00:10:40.445 "uuid": "850d5e26-1910-4537-9761-a70663f2d0ac", 00:10:40.445 "strip_size_kb": 64, 00:10:40.445 "state": "offline", 00:10:40.445 "raid_level": "concat", 00:10:40.445 "superblock": false, 00:10:40.445 "num_base_bdevs": 4, 00:10:40.445 "num_base_bdevs_discovered": 3, 00:10:40.445 "num_base_bdevs_operational": 3, 00:10:40.445 "base_bdevs_list": [ 00:10:40.445 { 00:10:40.445 "name": null, 00:10:40.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.445 "is_configured": false, 00:10:40.445 "data_offset": 0, 00:10:40.445 "data_size": 65536 00:10:40.445 }, 00:10:40.445 { 00:10:40.445 "name": "BaseBdev2", 00:10:40.445 "uuid": "73132a93-53a7-4310-9b9b-25a4bc1069e6", 00:10:40.445 "is_configured": 
true, 00:10:40.445 "data_offset": 0, 00:10:40.445 "data_size": 65536 00:10:40.445 }, 00:10:40.445 { 00:10:40.445 "name": "BaseBdev3", 00:10:40.445 "uuid": "982a580b-a608-4962-940e-9b1b7bb684ff", 00:10:40.445 "is_configured": true, 00:10:40.445 "data_offset": 0, 00:10:40.445 "data_size": 65536 00:10:40.445 }, 00:10:40.445 { 00:10:40.445 "name": "BaseBdev4", 00:10:40.445 "uuid": "ef777b4f-a6e0-424e-830a-7cf2e53d7515", 00:10:40.445 "is_configured": true, 00:10:40.445 "data_offset": 0, 00:10:40.445 "data_size": 65536 00:10:40.445 } 00:10:40.445 ] 00:10:40.445 }' 00:10:40.445 13:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.445 13:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.013 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:41.013 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:41.013 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.013 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.013 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.013 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:41.013 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.013 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:41.013 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:41.013 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:41.013 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:41.013 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.013 [2024-11-17 13:20:30.072515] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:41.013 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.013 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:41.013 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:41.013 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.013 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:41.013 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.013 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.013 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.013 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:41.013 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:41.013 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:41.013 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.013 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.013 [2024-11-17 13:20:30.222360] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:41.272 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.272 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:41.272 13:20:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:41.272 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.272 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:41.272 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.272 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.272 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.272 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:41.272 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:41.272 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:41.272 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.272 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.272 [2024-11-17 13:20:30.376248] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:41.272 [2024-11-17 13:20:30.376298] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:41.272 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.272 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:41.273 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:41.273 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.273 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:10:41.273 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.273 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.273 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.533 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:41.533 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:41.533 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:41.533 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:41.533 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:41.533 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:41.533 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.533 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.533 BaseBdev2 00:10:41.533 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.533 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:41.533 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:41.533 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:41.533 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:41.533 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:41.533 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:10:41.533 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:41.533 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.533 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.533 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.533 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:41.533 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.533 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.533 [ 00:10:41.533 { 00:10:41.533 "name": "BaseBdev2", 00:10:41.533 "aliases": [ 00:10:41.533 "bf6c46de-a00e-47a8-9f3f-37ec49b0858d" 00:10:41.533 ], 00:10:41.533 "product_name": "Malloc disk", 00:10:41.533 "block_size": 512, 00:10:41.533 "num_blocks": 65536, 00:10:41.533 "uuid": "bf6c46de-a00e-47a8-9f3f-37ec49b0858d", 00:10:41.533 "assigned_rate_limits": { 00:10:41.533 "rw_ios_per_sec": 0, 00:10:41.533 "rw_mbytes_per_sec": 0, 00:10:41.533 "r_mbytes_per_sec": 0, 00:10:41.533 "w_mbytes_per_sec": 0 00:10:41.533 }, 00:10:41.533 "claimed": false, 00:10:41.533 "zoned": false, 00:10:41.533 "supported_io_types": { 00:10:41.533 "read": true, 00:10:41.533 "write": true, 00:10:41.533 "unmap": true, 00:10:41.533 "flush": true, 00:10:41.533 "reset": true, 00:10:41.533 "nvme_admin": false, 00:10:41.533 "nvme_io": false, 00:10:41.533 "nvme_io_md": false, 00:10:41.533 "write_zeroes": true, 00:10:41.533 "zcopy": true, 00:10:41.533 "get_zone_info": false, 00:10:41.533 "zone_management": false, 00:10:41.533 "zone_append": false, 00:10:41.533 "compare": false, 00:10:41.533 "compare_and_write": false, 00:10:41.533 "abort": true, 00:10:41.533 "seek_hole": false, 00:10:41.533 
"seek_data": false, 00:10:41.533 "copy": true, 00:10:41.533 "nvme_iov_md": false 00:10:41.533 }, 00:10:41.533 "memory_domains": [ 00:10:41.533 { 00:10:41.533 "dma_device_id": "system", 00:10:41.533 "dma_device_type": 1 00:10:41.533 }, 00:10:41.533 { 00:10:41.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.533 "dma_device_type": 2 00:10:41.533 } 00:10:41.533 ], 00:10:41.533 "driver_specific": {} 00:10:41.533 } 00:10:41.533 ] 00:10:41.533 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.533 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:41.533 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:41.533 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:41.533 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:41.533 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.533 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.533 BaseBdev3 00:10:41.533 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.533 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:41.533 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:41.533 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:41.533 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:41.533 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:41.533 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:10:41.533 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:41.533 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.533 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.533 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.533 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:41.533 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.533 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.533 [ 00:10:41.533 { 00:10:41.533 "name": "BaseBdev3", 00:10:41.533 "aliases": [ 00:10:41.533 "ad80a461-12b4-4e8f-8890-17ac80c0eae3" 00:10:41.533 ], 00:10:41.533 "product_name": "Malloc disk", 00:10:41.533 "block_size": 512, 00:10:41.533 "num_blocks": 65536, 00:10:41.533 "uuid": "ad80a461-12b4-4e8f-8890-17ac80c0eae3", 00:10:41.533 "assigned_rate_limits": { 00:10:41.533 "rw_ios_per_sec": 0, 00:10:41.533 "rw_mbytes_per_sec": 0, 00:10:41.533 "r_mbytes_per_sec": 0, 00:10:41.533 "w_mbytes_per_sec": 0 00:10:41.533 }, 00:10:41.533 "claimed": false, 00:10:41.533 "zoned": false, 00:10:41.533 "supported_io_types": { 00:10:41.533 "read": true, 00:10:41.533 "write": true, 00:10:41.533 "unmap": true, 00:10:41.533 "flush": true, 00:10:41.533 "reset": true, 00:10:41.533 "nvme_admin": false, 00:10:41.533 "nvme_io": false, 00:10:41.533 "nvme_io_md": false, 00:10:41.533 "write_zeroes": true, 00:10:41.533 "zcopy": true, 00:10:41.533 "get_zone_info": false, 00:10:41.533 "zone_management": false, 00:10:41.533 "zone_append": false, 00:10:41.533 "compare": false, 00:10:41.533 "compare_and_write": false, 00:10:41.533 "abort": true, 00:10:41.533 "seek_hole": false, 00:10:41.533 "seek_data": false, 
00:10:41.533 "copy": true, 00:10:41.533 "nvme_iov_md": false 00:10:41.533 }, 00:10:41.533 "memory_domains": [ 00:10:41.533 { 00:10:41.533 "dma_device_id": "system", 00:10:41.533 "dma_device_type": 1 00:10:41.533 }, 00:10:41.533 { 00:10:41.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.533 "dma_device_type": 2 00:10:41.533 } 00:10:41.533 ], 00:10:41.533 "driver_specific": {} 00:10:41.533 } 00:10:41.533 ] 00:10:41.533 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.533 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:41.533 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:41.533 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:41.533 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:41.533 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.533 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.533 BaseBdev4 00:10:41.533 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.533 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:41.533 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:41.533 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:41.533 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:41.533 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:41.533 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:41.533 
13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:41.533 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.533 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.533 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.533 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:41.533 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.533 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.533 [ 00:10:41.533 { 00:10:41.533 "name": "BaseBdev4", 00:10:41.533 "aliases": [ 00:10:41.533 "2705a28a-c4d9-4c26-bca6-71c57b97939f" 00:10:41.533 ], 00:10:41.533 "product_name": "Malloc disk", 00:10:41.533 "block_size": 512, 00:10:41.533 "num_blocks": 65536, 00:10:41.534 "uuid": "2705a28a-c4d9-4c26-bca6-71c57b97939f", 00:10:41.534 "assigned_rate_limits": { 00:10:41.534 "rw_ios_per_sec": 0, 00:10:41.534 "rw_mbytes_per_sec": 0, 00:10:41.534 "r_mbytes_per_sec": 0, 00:10:41.534 "w_mbytes_per_sec": 0 00:10:41.534 }, 00:10:41.534 "claimed": false, 00:10:41.534 "zoned": false, 00:10:41.534 "supported_io_types": { 00:10:41.534 "read": true, 00:10:41.534 "write": true, 00:10:41.534 "unmap": true, 00:10:41.794 "flush": true, 00:10:41.794 "reset": true, 00:10:41.794 "nvme_admin": false, 00:10:41.794 "nvme_io": false, 00:10:41.794 "nvme_io_md": false, 00:10:41.794 "write_zeroes": true, 00:10:41.794 "zcopy": true, 00:10:41.794 "get_zone_info": false, 00:10:41.794 "zone_management": false, 00:10:41.794 "zone_append": false, 00:10:41.794 "compare": false, 00:10:41.794 "compare_and_write": false, 00:10:41.794 "abort": true, 00:10:41.794 "seek_hole": false, 00:10:41.794 "seek_data": false, 00:10:41.794 
"copy": true, 00:10:41.794 "nvme_iov_md": false 00:10:41.794 }, 00:10:41.794 "memory_domains": [ 00:10:41.794 { 00:10:41.794 "dma_device_id": "system", 00:10:41.794 "dma_device_type": 1 00:10:41.794 }, 00:10:41.794 { 00:10:41.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.794 "dma_device_type": 2 00:10:41.794 } 00:10:41.794 ], 00:10:41.794 "driver_specific": {} 00:10:41.794 } 00:10:41.794 ] 00:10:41.794 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.794 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:41.794 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:41.794 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:41.794 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:41.794 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.794 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.794 [2024-11-17 13:20:30.769555] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:41.794 [2024-11-17 13:20:30.769694] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:41.794 [2024-11-17 13:20:30.769769] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:41.794 [2024-11-17 13:20:30.771612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:41.794 [2024-11-17 13:20:30.771700] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:41.794 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.794 13:20:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:41.794 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.794 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:41.794 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:41.794 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:41.794 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:41.794 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.794 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.794 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.794 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.794 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.794 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.794 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.794 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.794 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.794 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.794 "name": "Existed_Raid", 00:10:41.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.794 "strip_size_kb": 64, 00:10:41.794 "state": "configuring", 00:10:41.794 
"raid_level": "concat", 00:10:41.794 "superblock": false, 00:10:41.794 "num_base_bdevs": 4, 00:10:41.794 "num_base_bdevs_discovered": 3, 00:10:41.794 "num_base_bdevs_operational": 4, 00:10:41.794 "base_bdevs_list": [ 00:10:41.794 { 00:10:41.794 "name": "BaseBdev1", 00:10:41.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.794 "is_configured": false, 00:10:41.794 "data_offset": 0, 00:10:41.794 "data_size": 0 00:10:41.794 }, 00:10:41.794 { 00:10:41.794 "name": "BaseBdev2", 00:10:41.794 "uuid": "bf6c46de-a00e-47a8-9f3f-37ec49b0858d", 00:10:41.794 "is_configured": true, 00:10:41.794 "data_offset": 0, 00:10:41.794 "data_size": 65536 00:10:41.794 }, 00:10:41.794 { 00:10:41.794 "name": "BaseBdev3", 00:10:41.794 "uuid": "ad80a461-12b4-4e8f-8890-17ac80c0eae3", 00:10:41.794 "is_configured": true, 00:10:41.794 "data_offset": 0, 00:10:41.794 "data_size": 65536 00:10:41.794 }, 00:10:41.794 { 00:10:41.794 "name": "BaseBdev4", 00:10:41.794 "uuid": "2705a28a-c4d9-4c26-bca6-71c57b97939f", 00:10:41.794 "is_configured": true, 00:10:41.794 "data_offset": 0, 00:10:41.794 "data_size": 65536 00:10:41.794 } 00:10:41.794 ] 00:10:41.794 }' 00:10:41.794 13:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.794 13:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.054 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:42.054 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.054 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.054 [2024-11-17 13:20:31.200863] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:42.054 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.054 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:42.054 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.054 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.054 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:42.054 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.054 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:42.054 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.054 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.054 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.054 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.054 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.054 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.054 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.054 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.054 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.054 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.054 "name": "Existed_Raid", 00:10:42.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.054 "strip_size_kb": 64, 00:10:42.054 "state": "configuring", 00:10:42.054 "raid_level": "concat", 00:10:42.054 "superblock": false, 
00:10:42.054 "num_base_bdevs": 4, 00:10:42.054 "num_base_bdevs_discovered": 2, 00:10:42.054 "num_base_bdevs_operational": 4, 00:10:42.054 "base_bdevs_list": [ 00:10:42.054 { 00:10:42.054 "name": "BaseBdev1", 00:10:42.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.054 "is_configured": false, 00:10:42.054 "data_offset": 0, 00:10:42.054 "data_size": 0 00:10:42.054 }, 00:10:42.054 { 00:10:42.054 "name": null, 00:10:42.054 "uuid": "bf6c46de-a00e-47a8-9f3f-37ec49b0858d", 00:10:42.054 "is_configured": false, 00:10:42.054 "data_offset": 0, 00:10:42.054 "data_size": 65536 00:10:42.054 }, 00:10:42.054 { 00:10:42.054 "name": "BaseBdev3", 00:10:42.054 "uuid": "ad80a461-12b4-4e8f-8890-17ac80c0eae3", 00:10:42.054 "is_configured": true, 00:10:42.054 "data_offset": 0, 00:10:42.054 "data_size": 65536 00:10:42.054 }, 00:10:42.054 { 00:10:42.054 "name": "BaseBdev4", 00:10:42.054 "uuid": "2705a28a-c4d9-4c26-bca6-71c57b97939f", 00:10:42.054 "is_configured": true, 00:10:42.054 "data_offset": 0, 00:10:42.054 "data_size": 65536 00:10:42.054 } 00:10:42.054 ] 00:10:42.054 }' 00:10:42.054 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.054 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.644 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.644 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:42.644 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.644 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.644 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.645 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:42.645 13:20:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:42.645 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.645 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.645 [2024-11-17 13:20:31.752127] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:42.645 BaseBdev1 00:10:42.645 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.645 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:42.645 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:42.645 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:42.645 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:42.645 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:42.645 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:42.645 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:42.645 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.645 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.645 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.645 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:42.645 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.645 13:20:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:42.645 [ 00:10:42.645 { 00:10:42.645 "name": "BaseBdev1", 00:10:42.645 "aliases": [ 00:10:42.645 "82253d7a-cd4d-43cc-a5cc-81e5a6d31a22" 00:10:42.645 ], 00:10:42.645 "product_name": "Malloc disk", 00:10:42.645 "block_size": 512, 00:10:42.645 "num_blocks": 65536, 00:10:42.645 "uuid": "82253d7a-cd4d-43cc-a5cc-81e5a6d31a22", 00:10:42.645 "assigned_rate_limits": { 00:10:42.645 "rw_ios_per_sec": 0, 00:10:42.645 "rw_mbytes_per_sec": 0, 00:10:42.645 "r_mbytes_per_sec": 0, 00:10:42.645 "w_mbytes_per_sec": 0 00:10:42.645 }, 00:10:42.645 "claimed": true, 00:10:42.645 "claim_type": "exclusive_write", 00:10:42.645 "zoned": false, 00:10:42.645 "supported_io_types": { 00:10:42.645 "read": true, 00:10:42.645 "write": true, 00:10:42.645 "unmap": true, 00:10:42.645 "flush": true, 00:10:42.645 "reset": true, 00:10:42.645 "nvme_admin": false, 00:10:42.645 "nvme_io": false, 00:10:42.645 "nvme_io_md": false, 00:10:42.645 "write_zeroes": true, 00:10:42.645 "zcopy": true, 00:10:42.645 "get_zone_info": false, 00:10:42.645 "zone_management": false, 00:10:42.645 "zone_append": false, 00:10:42.645 "compare": false, 00:10:42.645 "compare_and_write": false, 00:10:42.645 "abort": true, 00:10:42.645 "seek_hole": false, 00:10:42.645 "seek_data": false, 00:10:42.645 "copy": true, 00:10:42.645 "nvme_iov_md": false 00:10:42.645 }, 00:10:42.645 "memory_domains": [ 00:10:42.645 { 00:10:42.645 "dma_device_id": "system", 00:10:42.645 "dma_device_type": 1 00:10:42.645 }, 00:10:42.645 { 00:10:42.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.645 "dma_device_type": 2 00:10:42.645 } 00:10:42.645 ], 00:10:42.645 "driver_specific": {} 00:10:42.645 } 00:10:42.645 ] 00:10:42.645 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.645 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:42.645 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:42.645 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.645 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.645 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:42.645 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.645 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:42.645 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.645 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.645 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.645 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.645 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.645 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.645 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.645 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.645 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.645 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.645 "name": "Existed_Raid", 00:10:42.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.645 "strip_size_kb": 64, 00:10:42.645 "state": "configuring", 00:10:42.645 "raid_level": "concat", 00:10:42.645 "superblock": false, 
00:10:42.645 "num_base_bdevs": 4, 00:10:42.645 "num_base_bdevs_discovered": 3, 00:10:42.645 "num_base_bdevs_operational": 4, 00:10:42.645 "base_bdevs_list": [ 00:10:42.645 { 00:10:42.645 "name": "BaseBdev1", 00:10:42.645 "uuid": "82253d7a-cd4d-43cc-a5cc-81e5a6d31a22", 00:10:42.645 "is_configured": true, 00:10:42.645 "data_offset": 0, 00:10:42.645 "data_size": 65536 00:10:42.645 }, 00:10:42.645 { 00:10:42.645 "name": null, 00:10:42.645 "uuid": "bf6c46de-a00e-47a8-9f3f-37ec49b0858d", 00:10:42.645 "is_configured": false, 00:10:42.645 "data_offset": 0, 00:10:42.645 "data_size": 65536 00:10:42.645 }, 00:10:42.645 { 00:10:42.645 "name": "BaseBdev3", 00:10:42.645 "uuid": "ad80a461-12b4-4e8f-8890-17ac80c0eae3", 00:10:42.645 "is_configured": true, 00:10:42.645 "data_offset": 0, 00:10:42.645 "data_size": 65536 00:10:42.645 }, 00:10:42.645 { 00:10:42.645 "name": "BaseBdev4", 00:10:42.645 "uuid": "2705a28a-c4d9-4c26-bca6-71c57b97939f", 00:10:42.645 "is_configured": true, 00:10:42.645 "data_offset": 0, 00:10:42.645 "data_size": 65536 00:10:42.645 } 00:10:42.645 ] 00:10:42.645 }' 00:10:42.645 13:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.645 13:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.214 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.214 13:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.214 13:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.214 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:43.214 13:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.214 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:43.214 13:20:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:43.214 13:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.214 13:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.214 [2024-11-17 13:20:32.279318] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:43.214 13:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.214 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:43.214 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.214 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:43.214 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:43.214 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:43.214 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:43.214 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.214 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.214 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.214 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.214 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.214 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.214 13:20:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.214 13:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.214 13:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.214 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.214 "name": "Existed_Raid", 00:10:43.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.214 "strip_size_kb": 64, 00:10:43.214 "state": "configuring", 00:10:43.214 "raid_level": "concat", 00:10:43.214 "superblock": false, 00:10:43.214 "num_base_bdevs": 4, 00:10:43.214 "num_base_bdevs_discovered": 2, 00:10:43.214 "num_base_bdevs_operational": 4, 00:10:43.214 "base_bdevs_list": [ 00:10:43.214 { 00:10:43.214 "name": "BaseBdev1", 00:10:43.214 "uuid": "82253d7a-cd4d-43cc-a5cc-81e5a6d31a22", 00:10:43.214 "is_configured": true, 00:10:43.214 "data_offset": 0, 00:10:43.214 "data_size": 65536 00:10:43.214 }, 00:10:43.214 { 00:10:43.214 "name": null, 00:10:43.214 "uuid": "bf6c46de-a00e-47a8-9f3f-37ec49b0858d", 00:10:43.214 "is_configured": false, 00:10:43.214 "data_offset": 0, 00:10:43.214 "data_size": 65536 00:10:43.214 }, 00:10:43.214 { 00:10:43.214 "name": null, 00:10:43.214 "uuid": "ad80a461-12b4-4e8f-8890-17ac80c0eae3", 00:10:43.214 "is_configured": false, 00:10:43.214 "data_offset": 0, 00:10:43.214 "data_size": 65536 00:10:43.214 }, 00:10:43.214 { 00:10:43.214 "name": "BaseBdev4", 00:10:43.214 "uuid": "2705a28a-c4d9-4c26-bca6-71c57b97939f", 00:10:43.214 "is_configured": true, 00:10:43.214 "data_offset": 0, 00:10:43.214 "data_size": 65536 00:10:43.214 } 00:10:43.214 ] 00:10:43.214 }' 00:10:43.214 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.214 13:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.479 13:20:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.479 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:43.479 13:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.479 13:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.479 13:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.739 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:43.739 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:43.739 13:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.739 13:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.739 [2024-11-17 13:20:32.722583] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:43.739 13:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.739 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:43.739 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.739 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:43.739 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:43.739 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:43.739 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:43.739 13:20:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.739 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.739 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.739 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.739 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.739 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.739 13:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.739 13:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.739 13:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.739 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.739 "name": "Existed_Raid", 00:10:43.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.739 "strip_size_kb": 64, 00:10:43.739 "state": "configuring", 00:10:43.739 "raid_level": "concat", 00:10:43.739 "superblock": false, 00:10:43.739 "num_base_bdevs": 4, 00:10:43.739 "num_base_bdevs_discovered": 3, 00:10:43.739 "num_base_bdevs_operational": 4, 00:10:43.739 "base_bdevs_list": [ 00:10:43.739 { 00:10:43.739 "name": "BaseBdev1", 00:10:43.739 "uuid": "82253d7a-cd4d-43cc-a5cc-81e5a6d31a22", 00:10:43.739 "is_configured": true, 00:10:43.739 "data_offset": 0, 00:10:43.739 "data_size": 65536 00:10:43.739 }, 00:10:43.739 { 00:10:43.739 "name": null, 00:10:43.739 "uuid": "bf6c46de-a00e-47a8-9f3f-37ec49b0858d", 00:10:43.739 "is_configured": false, 00:10:43.739 "data_offset": 0, 00:10:43.739 "data_size": 65536 00:10:43.739 }, 00:10:43.739 { 00:10:43.739 "name": "BaseBdev3", 00:10:43.739 "uuid": 
"ad80a461-12b4-4e8f-8890-17ac80c0eae3", 00:10:43.739 "is_configured": true, 00:10:43.739 "data_offset": 0, 00:10:43.739 "data_size": 65536 00:10:43.739 }, 00:10:43.739 { 00:10:43.739 "name": "BaseBdev4", 00:10:43.739 "uuid": "2705a28a-c4d9-4c26-bca6-71c57b97939f", 00:10:43.739 "is_configured": true, 00:10:43.739 "data_offset": 0, 00:10:43.739 "data_size": 65536 00:10:43.739 } 00:10:43.739 ] 00:10:43.739 }' 00:10:43.739 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.739 13:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.998 13:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.998 13:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:43.998 13:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.998 13:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.998 13:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.998 13:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:43.998 13:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:43.998 13:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.998 13:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.998 [2024-11-17 13:20:33.221824] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:44.258 13:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.258 13:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:10:44.258 13:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.258 13:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.258 13:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:44.258 13:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.258 13:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:44.258 13:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.258 13:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.258 13:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.258 13:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.258 13:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.258 13:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.258 13:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.258 13:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.258 13:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.258 13:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.258 "name": "Existed_Raid", 00:10:44.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.258 "strip_size_kb": 64, 00:10:44.258 "state": "configuring", 00:10:44.258 "raid_level": "concat", 00:10:44.258 "superblock": false, 00:10:44.258 "num_base_bdevs": 4, 00:10:44.258 
"num_base_bdevs_discovered": 2, 00:10:44.258 "num_base_bdevs_operational": 4, 00:10:44.258 "base_bdevs_list": [ 00:10:44.258 { 00:10:44.258 "name": null, 00:10:44.258 "uuid": "82253d7a-cd4d-43cc-a5cc-81e5a6d31a22", 00:10:44.258 "is_configured": false, 00:10:44.258 "data_offset": 0, 00:10:44.258 "data_size": 65536 00:10:44.258 }, 00:10:44.258 { 00:10:44.258 "name": null, 00:10:44.258 "uuid": "bf6c46de-a00e-47a8-9f3f-37ec49b0858d", 00:10:44.258 "is_configured": false, 00:10:44.258 "data_offset": 0, 00:10:44.258 "data_size": 65536 00:10:44.258 }, 00:10:44.258 { 00:10:44.258 "name": "BaseBdev3", 00:10:44.258 "uuid": "ad80a461-12b4-4e8f-8890-17ac80c0eae3", 00:10:44.258 "is_configured": true, 00:10:44.258 "data_offset": 0, 00:10:44.258 "data_size": 65536 00:10:44.258 }, 00:10:44.258 { 00:10:44.258 "name": "BaseBdev4", 00:10:44.258 "uuid": "2705a28a-c4d9-4c26-bca6-71c57b97939f", 00:10:44.258 "is_configured": true, 00:10:44.258 "data_offset": 0, 00:10:44.258 "data_size": 65536 00:10:44.258 } 00:10:44.258 ] 00:10:44.258 }' 00:10:44.258 13:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.258 13:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.517 13:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.517 13:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:44.517 13:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.517 13:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.517 13:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.775 13:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:44.775 13:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:44.775 13:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.775 13:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.775 [2024-11-17 13:20:33.762951] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:44.775 13:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.775 13:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:44.775 13:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.775 13:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.775 13:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:44.775 13:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.775 13:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:44.775 13:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.775 13:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.775 13:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.775 13:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.775 13:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.775 13:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.775 13:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:10:44.775 13:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.775 13:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.775 13:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.775 "name": "Existed_Raid", 00:10:44.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.775 "strip_size_kb": 64, 00:10:44.775 "state": "configuring", 00:10:44.775 "raid_level": "concat", 00:10:44.775 "superblock": false, 00:10:44.775 "num_base_bdevs": 4, 00:10:44.775 "num_base_bdevs_discovered": 3, 00:10:44.775 "num_base_bdevs_operational": 4, 00:10:44.775 "base_bdevs_list": [ 00:10:44.775 { 00:10:44.775 "name": null, 00:10:44.775 "uuid": "82253d7a-cd4d-43cc-a5cc-81e5a6d31a22", 00:10:44.775 "is_configured": false, 00:10:44.776 "data_offset": 0, 00:10:44.776 "data_size": 65536 00:10:44.776 }, 00:10:44.776 { 00:10:44.776 "name": "BaseBdev2", 00:10:44.776 "uuid": "bf6c46de-a00e-47a8-9f3f-37ec49b0858d", 00:10:44.776 "is_configured": true, 00:10:44.776 "data_offset": 0, 00:10:44.776 "data_size": 65536 00:10:44.776 }, 00:10:44.776 { 00:10:44.776 "name": "BaseBdev3", 00:10:44.776 "uuid": "ad80a461-12b4-4e8f-8890-17ac80c0eae3", 00:10:44.776 "is_configured": true, 00:10:44.776 "data_offset": 0, 00:10:44.776 "data_size": 65536 00:10:44.776 }, 00:10:44.776 { 00:10:44.776 "name": "BaseBdev4", 00:10:44.776 "uuid": "2705a28a-c4d9-4c26-bca6-71c57b97939f", 00:10:44.776 "is_configured": true, 00:10:44.776 "data_offset": 0, 00:10:44.776 "data_size": 65536 00:10:44.776 } 00:10:44.776 ] 00:10:44.776 }' 00:10:44.776 13:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.776 13:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.034 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 
00:10:45.034 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:45.034 13:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.034 13:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.034 13:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.293 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:45.294 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.294 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:45.294 13:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.294 13:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.294 13:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.294 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 82253d7a-cd4d-43cc-a5cc-81e5a6d31a22 00:10:45.294 13:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.294 13:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.294 [2024-11-17 13:20:34.367031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:45.294 [2024-11-17 13:20:34.367077] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:45.294 [2024-11-17 13:20:34.367085] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:45.294 [2024-11-17 13:20:34.367352] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d0000063c0 00:10:45.294 [2024-11-17 13:20:34.367509] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:45.294 [2024-11-17 13:20:34.367522] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:45.294 [2024-11-17 13:20:34.367800] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:45.294 NewBaseBdev 00:10:45.294 13:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.294 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:45.294 13:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:45.294 13:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:45.294 13:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:45.294 13:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:45.294 13:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:45.294 13:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:45.294 13:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.294 13:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.294 13:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.294 13:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:45.294 13:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.294 13:20:34 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:45.294 [ 00:10:45.294 { 00:10:45.294 "name": "NewBaseBdev", 00:10:45.294 "aliases": [ 00:10:45.294 "82253d7a-cd4d-43cc-a5cc-81e5a6d31a22" 00:10:45.294 ], 00:10:45.294 "product_name": "Malloc disk", 00:10:45.294 "block_size": 512, 00:10:45.294 "num_blocks": 65536, 00:10:45.294 "uuid": "82253d7a-cd4d-43cc-a5cc-81e5a6d31a22", 00:10:45.294 "assigned_rate_limits": { 00:10:45.294 "rw_ios_per_sec": 0, 00:10:45.294 "rw_mbytes_per_sec": 0, 00:10:45.294 "r_mbytes_per_sec": 0, 00:10:45.294 "w_mbytes_per_sec": 0 00:10:45.294 }, 00:10:45.294 "claimed": true, 00:10:45.294 "claim_type": "exclusive_write", 00:10:45.294 "zoned": false, 00:10:45.294 "supported_io_types": { 00:10:45.294 "read": true, 00:10:45.294 "write": true, 00:10:45.294 "unmap": true, 00:10:45.294 "flush": true, 00:10:45.294 "reset": true, 00:10:45.294 "nvme_admin": false, 00:10:45.294 "nvme_io": false, 00:10:45.294 "nvme_io_md": false, 00:10:45.294 "write_zeroes": true, 00:10:45.294 "zcopy": true, 00:10:45.294 "get_zone_info": false, 00:10:45.294 "zone_management": false, 00:10:45.294 "zone_append": false, 00:10:45.294 "compare": false, 00:10:45.294 "compare_and_write": false, 00:10:45.294 "abort": true, 00:10:45.294 "seek_hole": false, 00:10:45.294 "seek_data": false, 00:10:45.294 "copy": true, 00:10:45.294 "nvme_iov_md": false 00:10:45.294 }, 00:10:45.294 "memory_domains": [ 00:10:45.294 { 00:10:45.294 "dma_device_id": "system", 00:10:45.294 "dma_device_type": 1 00:10:45.294 }, 00:10:45.294 { 00:10:45.294 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.294 "dma_device_type": 2 00:10:45.294 } 00:10:45.294 ], 00:10:45.294 "driver_specific": {} 00:10:45.294 } 00:10:45.294 ] 00:10:45.294 13:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.294 13:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:45.294 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 
-- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:45.294 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.294 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:45.294 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:45.294 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.294 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:45.294 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.294 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.294 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.294 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.294 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.294 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.294 13:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.294 13:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.294 13:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.294 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.294 "name": "Existed_Raid", 00:10:45.294 "uuid": "2ce99c2e-0f9e-488c-b29e-6d29752cfdb4", 00:10:45.294 "strip_size_kb": 64, 00:10:45.294 "state": "online", 00:10:45.294 "raid_level": "concat", 00:10:45.294 "superblock": false, 00:10:45.294 
"num_base_bdevs": 4, 00:10:45.294 "num_base_bdevs_discovered": 4, 00:10:45.294 "num_base_bdevs_operational": 4, 00:10:45.294 "base_bdevs_list": [ 00:10:45.294 { 00:10:45.294 "name": "NewBaseBdev", 00:10:45.294 "uuid": "82253d7a-cd4d-43cc-a5cc-81e5a6d31a22", 00:10:45.294 "is_configured": true, 00:10:45.294 "data_offset": 0, 00:10:45.294 "data_size": 65536 00:10:45.294 }, 00:10:45.294 { 00:10:45.294 "name": "BaseBdev2", 00:10:45.294 "uuid": "bf6c46de-a00e-47a8-9f3f-37ec49b0858d", 00:10:45.294 "is_configured": true, 00:10:45.294 "data_offset": 0, 00:10:45.294 "data_size": 65536 00:10:45.294 }, 00:10:45.294 { 00:10:45.294 "name": "BaseBdev3", 00:10:45.294 "uuid": "ad80a461-12b4-4e8f-8890-17ac80c0eae3", 00:10:45.294 "is_configured": true, 00:10:45.294 "data_offset": 0, 00:10:45.294 "data_size": 65536 00:10:45.294 }, 00:10:45.294 { 00:10:45.294 "name": "BaseBdev4", 00:10:45.294 "uuid": "2705a28a-c4d9-4c26-bca6-71c57b97939f", 00:10:45.294 "is_configured": true, 00:10:45.294 "data_offset": 0, 00:10:45.294 "data_size": 65536 00:10:45.294 } 00:10:45.294 ] 00:10:45.294 }' 00:10:45.294 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.294 13:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.863 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:45.863 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:45.863 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:45.863 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:45.863 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:45.863 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:45.863 13:20:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:45.863 13:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.863 13:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.863 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:45.863 [2024-11-17 13:20:34.854608] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:45.863 13:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.863 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:45.863 "name": "Existed_Raid", 00:10:45.863 "aliases": [ 00:10:45.863 "2ce99c2e-0f9e-488c-b29e-6d29752cfdb4" 00:10:45.863 ], 00:10:45.863 "product_name": "Raid Volume", 00:10:45.863 "block_size": 512, 00:10:45.863 "num_blocks": 262144, 00:10:45.863 "uuid": "2ce99c2e-0f9e-488c-b29e-6d29752cfdb4", 00:10:45.863 "assigned_rate_limits": { 00:10:45.863 "rw_ios_per_sec": 0, 00:10:45.863 "rw_mbytes_per_sec": 0, 00:10:45.863 "r_mbytes_per_sec": 0, 00:10:45.863 "w_mbytes_per_sec": 0 00:10:45.863 }, 00:10:45.863 "claimed": false, 00:10:45.863 "zoned": false, 00:10:45.863 "supported_io_types": { 00:10:45.863 "read": true, 00:10:45.863 "write": true, 00:10:45.863 "unmap": true, 00:10:45.863 "flush": true, 00:10:45.863 "reset": true, 00:10:45.863 "nvme_admin": false, 00:10:45.863 "nvme_io": false, 00:10:45.863 "nvme_io_md": false, 00:10:45.863 "write_zeroes": true, 00:10:45.863 "zcopy": false, 00:10:45.863 "get_zone_info": false, 00:10:45.863 "zone_management": false, 00:10:45.863 "zone_append": false, 00:10:45.863 "compare": false, 00:10:45.863 "compare_and_write": false, 00:10:45.863 "abort": false, 00:10:45.863 "seek_hole": false, 00:10:45.863 "seek_data": false, 00:10:45.863 "copy": false, 00:10:45.863 "nvme_iov_md": false 00:10:45.863 }, 
00:10:45.863 "memory_domains": [ 00:10:45.863 { 00:10:45.863 "dma_device_id": "system", 00:10:45.863 "dma_device_type": 1 00:10:45.863 }, 00:10:45.863 { 00:10:45.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.863 "dma_device_type": 2 00:10:45.864 }, 00:10:45.864 { 00:10:45.864 "dma_device_id": "system", 00:10:45.864 "dma_device_type": 1 00:10:45.864 }, 00:10:45.864 { 00:10:45.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.864 "dma_device_type": 2 00:10:45.864 }, 00:10:45.864 { 00:10:45.864 "dma_device_id": "system", 00:10:45.864 "dma_device_type": 1 00:10:45.864 }, 00:10:45.864 { 00:10:45.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.864 "dma_device_type": 2 00:10:45.864 }, 00:10:45.864 { 00:10:45.864 "dma_device_id": "system", 00:10:45.864 "dma_device_type": 1 00:10:45.864 }, 00:10:45.864 { 00:10:45.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.864 "dma_device_type": 2 00:10:45.864 } 00:10:45.864 ], 00:10:45.864 "driver_specific": { 00:10:45.864 "raid": { 00:10:45.864 "uuid": "2ce99c2e-0f9e-488c-b29e-6d29752cfdb4", 00:10:45.864 "strip_size_kb": 64, 00:10:45.864 "state": "online", 00:10:45.864 "raid_level": "concat", 00:10:45.864 "superblock": false, 00:10:45.864 "num_base_bdevs": 4, 00:10:45.864 "num_base_bdevs_discovered": 4, 00:10:45.864 "num_base_bdevs_operational": 4, 00:10:45.864 "base_bdevs_list": [ 00:10:45.864 { 00:10:45.864 "name": "NewBaseBdev", 00:10:45.864 "uuid": "82253d7a-cd4d-43cc-a5cc-81e5a6d31a22", 00:10:45.864 "is_configured": true, 00:10:45.864 "data_offset": 0, 00:10:45.864 "data_size": 65536 00:10:45.864 }, 00:10:45.864 { 00:10:45.864 "name": "BaseBdev2", 00:10:45.864 "uuid": "bf6c46de-a00e-47a8-9f3f-37ec49b0858d", 00:10:45.864 "is_configured": true, 00:10:45.864 "data_offset": 0, 00:10:45.864 "data_size": 65536 00:10:45.864 }, 00:10:45.864 { 00:10:45.864 "name": "BaseBdev3", 00:10:45.864 "uuid": "ad80a461-12b4-4e8f-8890-17ac80c0eae3", 00:10:45.864 "is_configured": true, 00:10:45.864 "data_offset": 0, 
00:10:45.864 "data_size": 65536 00:10:45.864 }, 00:10:45.864 { 00:10:45.864 "name": "BaseBdev4", 00:10:45.864 "uuid": "2705a28a-c4d9-4c26-bca6-71c57b97939f", 00:10:45.864 "is_configured": true, 00:10:45.864 "data_offset": 0, 00:10:45.864 "data_size": 65536 00:10:45.864 } 00:10:45.864 ] 00:10:45.864 } 00:10:45.864 } 00:10:45.864 }' 00:10:45.864 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:45.864 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:45.864 BaseBdev2 00:10:45.864 BaseBdev3 00:10:45.864 BaseBdev4' 00:10:45.864 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.864 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:45.864 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:45.864 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.864 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:45.864 13:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.864 13:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.864 13:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.864 13:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:45.864 13:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:45.864 13:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name 
in $base_bdev_names 00:10:45.864 13:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.864 13:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:45.864 13:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.864 13:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.864 13:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.864 13:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:45.864 13:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:45.864 13:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:45.864 13:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:45.864 13:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.864 13:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.864 13:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.864 13:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.124 13:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:46.124 13:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:46.124 13:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:46.124 13:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev4 00:10:46.124 13:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.124 13:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.124 13:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.124 13:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.124 13:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:46.124 13:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:46.124 13:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:46.124 13:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.124 13:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.124 [2024-11-17 13:20:35.165729] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:46.124 [2024-11-17 13:20:35.165763] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:46.124 [2024-11-17 13:20:35.165847] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:46.124 [2024-11-17 13:20:35.165917] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:46.124 [2024-11-17 13:20:35.165928] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:46.124 13:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.124 13:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71193 00:10:46.124 13:20:35 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 71193 ']' 00:10:46.124 13:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71193 00:10:46.124 13:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:46.124 13:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:46.124 13:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71193 00:10:46.124 killing process with pid 71193 00:10:46.124 13:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:46.124 13:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:46.124 13:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71193' 00:10:46.124 13:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71193 00:10:46.124 [2024-11-17 13:20:35.213346] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:46.124 13:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71193 00:10:46.693 [2024-11-17 13:20:35.614883] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:47.632 13:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:47.632 00:10:47.632 real 0m11.573s 00:10:47.632 user 0m18.283s 00:10:47.632 sys 0m2.202s 00:10:47.632 ************************************ 00:10:47.632 END TEST raid_state_function_test 00:10:47.633 ************************************ 00:10:47.633 13:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:47.633 13:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.633 13:20:36 bdev_raid -- bdev/bdev_raid.sh@969 -- # 
run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:10:47.633 13:20:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:47.633 13:20:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:47.633 13:20:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:47.633 ************************************ 00:10:47.633 START TEST raid_state_function_test_sb 00:10:47.633 ************************************ 00:10:47.633 13:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:10:47.633 13:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:47.633 13:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:47.633 13:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:47.633 13:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:47.633 13:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:47.633 13:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:47.633 13:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:47.633 13:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:47.633 13:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:47.633 13:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:47.633 13:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:47.633 13:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:47.633 13:20:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:47.633 13:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:47.633 13:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:47.633 13:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:47.633 13:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:47.633 13:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:47.633 13:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:47.633 13:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:47.633 13:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:47.633 13:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:47.633 13:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:47.633 13:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:47.633 13:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:47.633 13:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:47.633 13:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:47.633 13:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:47.633 13:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:47.633 13:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=71864 00:10:47.633 13:20:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:47.633 Process raid pid: 71864 00:10:47.633 13:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71864' 00:10:47.633 13:20:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 71864 00:10:47.633 13:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 71864 ']' 00:10:47.633 13:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:47.633 13:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:47.633 13:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:47.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:47.633 13:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:47.633 13:20:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.893 [2024-11-17 13:20:36.910194] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:10:47.893 [2024-11-17 13:20:36.910407] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:47.893 [2024-11-17 13:20:37.093188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.153 [2024-11-17 13:20:37.204687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.441 [2024-11-17 13:20:37.416929] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:48.441 [2024-11-17 13:20:37.416966] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:48.708 13:20:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:48.708 13:20:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:48.708 13:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:48.708 13:20:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.708 13:20:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.708 [2024-11-17 13:20:37.756404] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:48.708 [2024-11-17 13:20:37.756456] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:48.708 [2024-11-17 13:20:37.756466] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:48.708 [2024-11-17 13:20:37.756476] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:48.708 [2024-11-17 13:20:37.756482] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:10:48.708 [2024-11-17 13:20:37.756490] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:48.708 [2024-11-17 13:20:37.756496] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:48.708 [2024-11-17 13:20:37.756504] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:48.708 13:20:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.708 13:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:48.708 13:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:48.708 13:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:48.708 13:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:48.708 13:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:48.708 13:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:48.708 13:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.709 13:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.709 13:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.709 13:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.709 13:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.709 13:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.709 
13:20:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.709 13:20:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.709 13:20:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.709 13:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.709 "name": "Existed_Raid", 00:10:48.709 "uuid": "87b49639-c7d9-4344-ac10-ff6edaba95f7", 00:10:48.709 "strip_size_kb": 64, 00:10:48.709 "state": "configuring", 00:10:48.709 "raid_level": "concat", 00:10:48.709 "superblock": true, 00:10:48.709 "num_base_bdevs": 4, 00:10:48.709 "num_base_bdevs_discovered": 0, 00:10:48.709 "num_base_bdevs_operational": 4, 00:10:48.709 "base_bdevs_list": [ 00:10:48.709 { 00:10:48.709 "name": "BaseBdev1", 00:10:48.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.709 "is_configured": false, 00:10:48.709 "data_offset": 0, 00:10:48.709 "data_size": 0 00:10:48.709 }, 00:10:48.709 { 00:10:48.709 "name": "BaseBdev2", 00:10:48.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.709 "is_configured": false, 00:10:48.709 "data_offset": 0, 00:10:48.709 "data_size": 0 00:10:48.709 }, 00:10:48.709 { 00:10:48.709 "name": "BaseBdev3", 00:10:48.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.709 "is_configured": false, 00:10:48.709 "data_offset": 0, 00:10:48.709 "data_size": 0 00:10:48.709 }, 00:10:48.709 { 00:10:48.709 "name": "BaseBdev4", 00:10:48.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.709 "is_configured": false, 00:10:48.709 "data_offset": 0, 00:10:48.709 "data_size": 0 00:10:48.709 } 00:10:48.709 ] 00:10:48.709 }' 00:10:48.709 13:20:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.709 13:20:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.968 13:20:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:48.968 13:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.968 13:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.968 [2024-11-17 13:20:38.187587] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:48.968 [2024-11-17 13:20:38.187674] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:48.968 13:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.968 13:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:49.228 13:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.228 13:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.228 [2024-11-17 13:20:38.199579] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:49.228 [2024-11-17 13:20:38.199654] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:49.228 [2024-11-17 13:20:38.199680] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:49.228 [2024-11-17 13:20:38.199702] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:49.228 [2024-11-17 13:20:38.199720] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:49.228 [2024-11-17 13:20:38.199740] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:49.228 [2024-11-17 13:20:38.199757] bdev.c:8277:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:10:49.228 [2024-11-17 13:20:38.199777] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:49.228 13:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.228 13:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:49.228 13:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.228 13:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.228 [2024-11-17 13:20:38.245722] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:49.228 BaseBdev1 00:10:49.228 13:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.228 13:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:49.228 13:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:49.228 13:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:49.228 13:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:49.228 13:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:49.228 13:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:49.228 13:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:49.228 13:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.228 13:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.228 13:20:38 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.228 13:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:49.228 13:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.228 13:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.228 [ 00:10:49.228 { 00:10:49.228 "name": "BaseBdev1", 00:10:49.228 "aliases": [ 00:10:49.228 "878de6f2-cc2a-4c62-958b-99ef1a849d4e" 00:10:49.228 ], 00:10:49.228 "product_name": "Malloc disk", 00:10:49.228 "block_size": 512, 00:10:49.228 "num_blocks": 65536, 00:10:49.228 "uuid": "878de6f2-cc2a-4c62-958b-99ef1a849d4e", 00:10:49.228 "assigned_rate_limits": { 00:10:49.228 "rw_ios_per_sec": 0, 00:10:49.228 "rw_mbytes_per_sec": 0, 00:10:49.228 "r_mbytes_per_sec": 0, 00:10:49.228 "w_mbytes_per_sec": 0 00:10:49.228 }, 00:10:49.228 "claimed": true, 00:10:49.228 "claim_type": "exclusive_write", 00:10:49.228 "zoned": false, 00:10:49.228 "supported_io_types": { 00:10:49.228 "read": true, 00:10:49.228 "write": true, 00:10:49.228 "unmap": true, 00:10:49.228 "flush": true, 00:10:49.228 "reset": true, 00:10:49.228 "nvme_admin": false, 00:10:49.228 "nvme_io": false, 00:10:49.228 "nvme_io_md": false, 00:10:49.228 "write_zeroes": true, 00:10:49.228 "zcopy": true, 00:10:49.228 "get_zone_info": false, 00:10:49.228 "zone_management": false, 00:10:49.228 "zone_append": false, 00:10:49.228 "compare": false, 00:10:49.228 "compare_and_write": false, 00:10:49.228 "abort": true, 00:10:49.228 "seek_hole": false, 00:10:49.228 "seek_data": false, 00:10:49.228 "copy": true, 00:10:49.228 "nvme_iov_md": false 00:10:49.228 }, 00:10:49.228 "memory_domains": [ 00:10:49.228 { 00:10:49.228 "dma_device_id": "system", 00:10:49.228 "dma_device_type": 1 00:10:49.228 }, 00:10:49.228 { 00:10:49.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.228 "dma_device_type": 2 00:10:49.228 } 
00:10:49.228 ], 00:10:49.228 "driver_specific": {} 00:10:49.228 } 00:10:49.228 ] 00:10:49.228 13:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.228 13:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:49.228 13:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:49.228 13:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.228 13:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.228 13:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:49.228 13:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.228 13:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:49.228 13:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.228 13:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.229 13:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.229 13:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.229 13:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.229 13:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.229 13:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.229 13:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.229 13:20:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.229 13:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.229 "name": "Existed_Raid", 00:10:49.229 "uuid": "a1ccc28b-c0a4-4e39-a072-628c5dc55844", 00:10:49.229 "strip_size_kb": 64, 00:10:49.229 "state": "configuring", 00:10:49.229 "raid_level": "concat", 00:10:49.229 "superblock": true, 00:10:49.229 "num_base_bdevs": 4, 00:10:49.229 "num_base_bdevs_discovered": 1, 00:10:49.229 "num_base_bdevs_operational": 4, 00:10:49.229 "base_bdevs_list": [ 00:10:49.229 { 00:10:49.229 "name": "BaseBdev1", 00:10:49.229 "uuid": "878de6f2-cc2a-4c62-958b-99ef1a849d4e", 00:10:49.229 "is_configured": true, 00:10:49.229 "data_offset": 2048, 00:10:49.229 "data_size": 63488 00:10:49.229 }, 00:10:49.229 { 00:10:49.229 "name": "BaseBdev2", 00:10:49.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.229 "is_configured": false, 00:10:49.229 "data_offset": 0, 00:10:49.229 "data_size": 0 00:10:49.229 }, 00:10:49.229 { 00:10:49.229 "name": "BaseBdev3", 00:10:49.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.229 "is_configured": false, 00:10:49.229 "data_offset": 0, 00:10:49.229 "data_size": 0 00:10:49.229 }, 00:10:49.229 { 00:10:49.229 "name": "BaseBdev4", 00:10:49.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.229 "is_configured": false, 00:10:49.229 "data_offset": 0, 00:10:49.229 "data_size": 0 00:10:49.229 } 00:10:49.229 ] 00:10:49.229 }' 00:10:49.229 13:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.229 13:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.798 13:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:49.798 13:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.798 13:20:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.798 [2024-11-17 13:20:38.724965] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:49.798 [2024-11-17 13:20:38.725009] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:49.798 13:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.798 13:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:49.798 13:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.799 13:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.799 [2024-11-17 13:20:38.737009] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:49.799 [2024-11-17 13:20:38.738928] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:49.799 [2024-11-17 13:20:38.738970] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:49.799 [2024-11-17 13:20:38.738980] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:49.799 [2024-11-17 13:20:38.738992] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:49.799 [2024-11-17 13:20:38.738999] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:49.799 [2024-11-17 13:20:38.739007] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:49.799 13:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.799 13:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:10:49.799 13:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:49.799 13:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:49.799 13:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.799 13:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.799 13:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:49.799 13:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.799 13:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:49.799 13:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.799 13:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.799 13:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.799 13:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.799 13:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.799 13:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.799 13:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.799 13:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.799 13:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.799 13:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:10:49.799 "name": "Existed_Raid", 00:10:49.799 "uuid": "6689dbe2-8baa-4819-b58d-1a14b0a49c63", 00:10:49.799 "strip_size_kb": 64, 00:10:49.799 "state": "configuring", 00:10:49.799 "raid_level": "concat", 00:10:49.799 "superblock": true, 00:10:49.799 "num_base_bdevs": 4, 00:10:49.799 "num_base_bdevs_discovered": 1, 00:10:49.799 "num_base_bdevs_operational": 4, 00:10:49.799 "base_bdevs_list": [ 00:10:49.799 { 00:10:49.799 "name": "BaseBdev1", 00:10:49.799 "uuid": "878de6f2-cc2a-4c62-958b-99ef1a849d4e", 00:10:49.799 "is_configured": true, 00:10:49.799 "data_offset": 2048, 00:10:49.799 "data_size": 63488 00:10:49.799 }, 00:10:49.799 { 00:10:49.799 "name": "BaseBdev2", 00:10:49.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.799 "is_configured": false, 00:10:49.799 "data_offset": 0, 00:10:49.799 "data_size": 0 00:10:49.799 }, 00:10:49.799 { 00:10:49.799 "name": "BaseBdev3", 00:10:49.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.799 "is_configured": false, 00:10:49.799 "data_offset": 0, 00:10:49.799 "data_size": 0 00:10:49.799 }, 00:10:49.799 { 00:10:49.799 "name": "BaseBdev4", 00:10:49.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.799 "is_configured": false, 00:10:49.799 "data_offset": 0, 00:10:49.799 "data_size": 0 00:10:49.799 } 00:10:49.799 ] 00:10:49.799 }' 00:10:49.799 13:20:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.799 13:20:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.059 13:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:50.059 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.059 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.059 [2024-11-17 13:20:39.234172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:10:50.059 BaseBdev2 00:10:50.059 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.059 13:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:50.059 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:50.059 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:50.059 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:50.059 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:50.059 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:50.059 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:50.059 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.059 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.059 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.059 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:50.059 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.059 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.059 [ 00:10:50.059 { 00:10:50.059 "name": "BaseBdev2", 00:10:50.059 "aliases": [ 00:10:50.059 "92d906cb-18c4-4510-b1b0-145bda364b83" 00:10:50.059 ], 00:10:50.059 "product_name": "Malloc disk", 00:10:50.059 "block_size": 512, 00:10:50.059 "num_blocks": 65536, 00:10:50.059 "uuid": "92d906cb-18c4-4510-b1b0-145bda364b83", 
00:10:50.059 "assigned_rate_limits": { 00:10:50.059 "rw_ios_per_sec": 0, 00:10:50.059 "rw_mbytes_per_sec": 0, 00:10:50.059 "r_mbytes_per_sec": 0, 00:10:50.059 "w_mbytes_per_sec": 0 00:10:50.059 }, 00:10:50.059 "claimed": true, 00:10:50.059 "claim_type": "exclusive_write", 00:10:50.059 "zoned": false, 00:10:50.059 "supported_io_types": { 00:10:50.059 "read": true, 00:10:50.059 "write": true, 00:10:50.059 "unmap": true, 00:10:50.059 "flush": true, 00:10:50.059 "reset": true, 00:10:50.059 "nvme_admin": false, 00:10:50.059 "nvme_io": false, 00:10:50.059 "nvme_io_md": false, 00:10:50.059 "write_zeroes": true, 00:10:50.059 "zcopy": true, 00:10:50.059 "get_zone_info": false, 00:10:50.059 "zone_management": false, 00:10:50.059 "zone_append": false, 00:10:50.059 "compare": false, 00:10:50.059 "compare_and_write": false, 00:10:50.059 "abort": true, 00:10:50.059 "seek_hole": false, 00:10:50.059 "seek_data": false, 00:10:50.059 "copy": true, 00:10:50.059 "nvme_iov_md": false 00:10:50.059 }, 00:10:50.059 "memory_domains": [ 00:10:50.059 { 00:10:50.059 "dma_device_id": "system", 00:10:50.059 "dma_device_type": 1 00:10:50.059 }, 00:10:50.059 { 00:10:50.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.059 "dma_device_type": 2 00:10:50.059 } 00:10:50.059 ], 00:10:50.059 "driver_specific": {} 00:10:50.059 } 00:10:50.059 ] 00:10:50.059 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.059 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:50.059 13:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:50.059 13:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:50.059 13:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:50.059 13:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:10:50.059 13:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:50.059 13:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:50.059 13:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.059 13:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:50.059 13:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.059 13:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.059 13:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.059 13:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.059 13:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.318 13:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.318 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.318 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.318 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.318 13:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.318 "name": "Existed_Raid", 00:10:50.318 "uuid": "6689dbe2-8baa-4819-b58d-1a14b0a49c63", 00:10:50.318 "strip_size_kb": 64, 00:10:50.318 "state": "configuring", 00:10:50.318 "raid_level": "concat", 00:10:50.318 "superblock": true, 00:10:50.318 "num_base_bdevs": 4, 00:10:50.318 "num_base_bdevs_discovered": 2, 00:10:50.318 
"num_base_bdevs_operational": 4, 00:10:50.318 "base_bdevs_list": [ 00:10:50.318 { 00:10:50.318 "name": "BaseBdev1", 00:10:50.318 "uuid": "878de6f2-cc2a-4c62-958b-99ef1a849d4e", 00:10:50.318 "is_configured": true, 00:10:50.318 "data_offset": 2048, 00:10:50.318 "data_size": 63488 00:10:50.318 }, 00:10:50.318 { 00:10:50.318 "name": "BaseBdev2", 00:10:50.318 "uuid": "92d906cb-18c4-4510-b1b0-145bda364b83", 00:10:50.318 "is_configured": true, 00:10:50.318 "data_offset": 2048, 00:10:50.318 "data_size": 63488 00:10:50.318 }, 00:10:50.318 { 00:10:50.318 "name": "BaseBdev3", 00:10:50.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.318 "is_configured": false, 00:10:50.318 "data_offset": 0, 00:10:50.318 "data_size": 0 00:10:50.318 }, 00:10:50.318 { 00:10:50.318 "name": "BaseBdev4", 00:10:50.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.318 "is_configured": false, 00:10:50.318 "data_offset": 0, 00:10:50.318 "data_size": 0 00:10:50.318 } 00:10:50.318 ] 00:10:50.318 }' 00:10:50.318 13:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.318 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.578 13:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:50.578 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.578 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.578 [2024-11-17 13:20:39.719547] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:50.578 BaseBdev3 00:10:50.578 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.578 13:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:50.578 13:20:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:50.578 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:50.578 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:50.578 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:50.578 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:50.578 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:50.578 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.578 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.578 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.578 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:50.578 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.578 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.578 [ 00:10:50.578 { 00:10:50.578 "name": "BaseBdev3", 00:10:50.578 "aliases": [ 00:10:50.578 "430d82f6-b6b1-412b-b039-ac4553bc2636" 00:10:50.578 ], 00:10:50.578 "product_name": "Malloc disk", 00:10:50.578 "block_size": 512, 00:10:50.578 "num_blocks": 65536, 00:10:50.578 "uuid": "430d82f6-b6b1-412b-b039-ac4553bc2636", 00:10:50.578 "assigned_rate_limits": { 00:10:50.578 "rw_ios_per_sec": 0, 00:10:50.578 "rw_mbytes_per_sec": 0, 00:10:50.578 "r_mbytes_per_sec": 0, 00:10:50.578 "w_mbytes_per_sec": 0 00:10:50.578 }, 00:10:50.578 "claimed": true, 00:10:50.578 "claim_type": "exclusive_write", 00:10:50.578 "zoned": false, 00:10:50.578 "supported_io_types": { 
00:10:50.578 "read": true,
00:10:50.578 "write": true,
00:10:50.578 "unmap": true,
00:10:50.578 "flush": true,
00:10:50.578 "reset": true,
00:10:50.578 "nvme_admin": false,
00:10:50.578 "nvme_io": false,
00:10:50.578 "nvme_io_md": false,
00:10:50.578 "write_zeroes": true,
00:10:50.578 "zcopy": true,
00:10:50.578 "get_zone_info": false,
00:10:50.578 "zone_management": false,
00:10:50.578 "zone_append": false,
00:10:50.578 "compare": false,
00:10:50.578 "compare_and_write": false,
00:10:50.578 "abort": true,
00:10:50.578 "seek_hole": false,
00:10:50.578 "seek_data": false,
00:10:50.578 "copy": true,
00:10:50.578 "nvme_iov_md": false
00:10:50.578 },
00:10:50.578 "memory_domains": [
00:10:50.578 {
00:10:50.578 "dma_device_id": "system",
00:10:50.578 "dma_device_type": 1
00:10:50.578 },
00:10:50.578 {
00:10:50.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:50.578 "dma_device_type": 2
00:10:50.578 }
00:10:50.578 ],
00:10:50.578 "driver_specific": {}
00:10:50.578 }
00:10:50.578 ]
00:10:50.578 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:50.578 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:10:50.578 13:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:10:50.578 13:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:10:50.578 13:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:10:50.578 13:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:50.578 13:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:50.578 13:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:50.578 13:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:50.578 13:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:50.578 13:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:50.578 13:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:50.578 13:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:50.578 13:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:50.578 13:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:50.578 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:50.578 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:50.578 13:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:50.578 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:50.838 13:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:50.838 "name": "Existed_Raid",
00:10:50.838 "uuid": "6689dbe2-8baa-4819-b58d-1a14b0a49c63",
00:10:50.838 "strip_size_kb": 64,
00:10:50.838 "state": "configuring",
00:10:50.838 "raid_level": "concat",
00:10:50.838 "superblock": true,
00:10:50.838 "num_base_bdevs": 4,
00:10:50.838 "num_base_bdevs_discovered": 3,
00:10:50.838 "num_base_bdevs_operational": 4,
00:10:50.838 "base_bdevs_list": [
00:10:50.838 {
00:10:50.838 "name": "BaseBdev1",
00:10:50.838 "uuid": "878de6f2-cc2a-4c62-958b-99ef1a849d4e",
00:10:50.838 "is_configured": true,
00:10:50.838 "data_offset": 2048,
00:10:50.838 "data_size": 63488
00:10:50.838 },
00:10:50.838 {
00:10:50.838 "name": "BaseBdev2",
00:10:50.838 "uuid": "92d906cb-18c4-4510-b1b0-145bda364b83",
00:10:50.838 "is_configured": true,
00:10:50.838 "data_offset": 2048,
00:10:50.838 "data_size": 63488
00:10:50.838 },
00:10:50.838 {
00:10:50.838 "name": "BaseBdev3",
00:10:50.838 "uuid": "430d82f6-b6b1-412b-b039-ac4553bc2636",
00:10:50.838 "is_configured": true,
00:10:50.838 "data_offset": 2048,
00:10:50.838 "data_size": 63488
00:10:50.838 },
00:10:50.838 {
00:10:50.838 "name": "BaseBdev4",
00:10:50.838 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:50.838 "is_configured": false,
00:10:50.838 "data_offset": 0,
00:10:50.838 "data_size": 0
00:10:50.838 }
00:10:50.838 ]
00:10:50.838 }'
00:10:50.838 13:20:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:50.838 13:20:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:51.098 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:10:51.098 13:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:51.098 13:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:51.098 [2024-11-17 13:20:40.232875] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:10:51.098 [2024-11-17 13:20:40.233271] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:10:51.098 [2024-11-17 13:20:40.233324] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:10:51.098 [2024-11-17 13:20:40.233641] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:10:51.098 [2024-11-17 13:20:40.233874] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:10:51.098 BaseBdev4
00:10:51.098 [2024-11-17 13:20:40.233923] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:10:51.098 [2024-11-17 13:20:40.234080] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:51.098 13:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:51.098 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4
00:10:51.098 13:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4
00:10:51.098 13:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:10:51.098 13:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:10:51.098 13:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:10:51.098 13:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:10:51.098 13:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:10:51.098 13:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:51.098 13:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:51.098 13:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:51.098 13:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000
00:10:51.098 13:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:51.098 13:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:51.098 [
00:10:51.098 {
00:10:51.098 "name": "BaseBdev4",
00:10:51.098 "aliases": [
00:10:51.098 "e82370c8-3ef8-4381-9367-d9b409d7819b"
00:10:51.098 ],
00:10:51.098 "product_name": "Malloc disk",
00:10:51.098 "block_size": 512,
00:10:51.098 "num_blocks": 65536,
00:10:51.098 "uuid": "e82370c8-3ef8-4381-9367-d9b409d7819b",
00:10:51.098 "assigned_rate_limits": {
00:10:51.098 "rw_ios_per_sec": 0,
00:10:51.098 "rw_mbytes_per_sec": 0,
00:10:51.098 "r_mbytes_per_sec": 0,
00:10:51.098 "w_mbytes_per_sec": 0
00:10:51.098 },
00:10:51.098 "claimed": true,
00:10:51.098 "claim_type": "exclusive_write",
00:10:51.098 "zoned": false,
00:10:51.098 "supported_io_types": {
00:10:51.098 "read": true,
00:10:51.098 "write": true,
00:10:51.098 "unmap": true,
00:10:51.098 "flush": true,
00:10:51.098 "reset": true,
00:10:51.098 "nvme_admin": false,
00:10:51.098 "nvme_io": false,
00:10:51.098 "nvme_io_md": false,
00:10:51.098 "write_zeroes": true,
00:10:51.098 "zcopy": true,
00:10:51.098 "get_zone_info": false,
00:10:51.098 "zone_management": false,
00:10:51.098 "zone_append": false,
00:10:51.098 "compare": false,
00:10:51.098 "compare_and_write": false,
00:10:51.098 "abort": true,
00:10:51.098 "seek_hole": false,
00:10:51.098 "seek_data": false,
00:10:51.098 "copy": true,
00:10:51.098 "nvme_iov_md": false
00:10:51.098 },
00:10:51.098 "memory_domains": [
00:10:51.098 {
00:10:51.098 "dma_device_id": "system",
00:10:51.098 "dma_device_type": 1
00:10:51.098 },
00:10:51.098 {
00:10:51.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:51.098 "dma_device_type": 2
00:10:51.098 }
00:10:51.098 ],
00:10:51.098 "driver_specific": {}
00:10:51.098 }
00:10:51.098 ]
00:10:51.099 13:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:51.099 13:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:10:51.099 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:10:51.099 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:10:51.099 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4
00:10:51.099 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:51.099 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:51.099 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:51.099 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:51.099 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:51.099 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:51.099 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:51.099 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:51.099 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:51.099 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:51.099 13:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:51.099 13:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:51.099 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:51.099 13:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:51.359 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:51.359 "name": "Existed_Raid",
00:10:51.359 "uuid": "6689dbe2-8baa-4819-b58d-1a14b0a49c63",
00:10:51.359 "strip_size_kb": 64,
00:10:51.359 "state": "online",
00:10:51.359 "raid_level": "concat",
00:10:51.359 "superblock": true,
00:10:51.359 "num_base_bdevs": 4,
00:10:51.359 "num_base_bdevs_discovered": 4,
00:10:51.359 "num_base_bdevs_operational": 4,
00:10:51.359 "base_bdevs_list": [
00:10:51.359 {
00:10:51.359 "name": "BaseBdev1",
00:10:51.359 "uuid": "878de6f2-cc2a-4c62-958b-99ef1a849d4e",
00:10:51.359 "is_configured": true,
00:10:51.359 "data_offset": 2048,
00:10:51.359 "data_size": 63488
00:10:51.359 },
00:10:51.359 {
00:10:51.359 "name": "BaseBdev2",
00:10:51.359 "uuid": "92d906cb-18c4-4510-b1b0-145bda364b83",
00:10:51.359 "is_configured": true,
00:10:51.359 "data_offset": 2048,
00:10:51.359 "data_size": 63488
00:10:51.359 },
00:10:51.359 {
00:10:51.359 "name": "BaseBdev3",
00:10:51.359 "uuid": "430d82f6-b6b1-412b-b039-ac4553bc2636",
00:10:51.359 "is_configured": true,
00:10:51.359 "data_offset": 2048,
00:10:51.359 "data_size": 63488
00:10:51.359 },
00:10:51.359 {
00:10:51.359 "name": "BaseBdev4",
00:10:51.359 "uuid": "e82370c8-3ef8-4381-9367-d9b409d7819b",
00:10:51.359 "is_configured": true,
00:10:51.359 "data_offset": 2048,
00:10:51.359 "data_size": 63488
00:10:51.359 }
00:10:51.359 ]
00:10:51.359 }'
00:10:51.359 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:51.359 13:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:51.619 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:10:51.619 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:10:51.619 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:10:51.619 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:10:51.619 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:10:51.619 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:10:51.619 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:10:51.619 13:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:51.619 13:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:51.619 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:10:51.619 [2024-11-17 13:20:40.724404] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:51.619 13:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:51.619 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:10:51.619 "name": "Existed_Raid",
00:10:51.619 "aliases": [
00:10:51.619 "6689dbe2-8baa-4819-b58d-1a14b0a49c63"
00:10:51.619 ],
00:10:51.619 "product_name": "Raid Volume",
00:10:51.619 "block_size": 512,
00:10:51.619 "num_blocks": 253952,
00:10:51.619 "uuid": "6689dbe2-8baa-4819-b58d-1a14b0a49c63",
00:10:51.619 "assigned_rate_limits": {
00:10:51.619 "rw_ios_per_sec": 0,
00:10:51.619 "rw_mbytes_per_sec": 0,
00:10:51.619 "r_mbytes_per_sec": 0,
00:10:51.619 "w_mbytes_per_sec": 0
00:10:51.619 },
00:10:51.619 "claimed": false,
00:10:51.619 "zoned": false,
00:10:51.619 "supported_io_types": {
00:10:51.619 "read": true,
00:10:51.619 "write": true,
00:10:51.619 "unmap": true,
00:10:51.619 "flush": true,
00:10:51.619 "reset": true,
00:10:51.619 "nvme_admin": false,
00:10:51.619 "nvme_io": false,
00:10:51.619 "nvme_io_md": false,
00:10:51.619 "write_zeroes": true,
00:10:51.619 "zcopy": false,
00:10:51.619 "get_zone_info": false,
00:10:51.619 "zone_management": false,
00:10:51.619 "zone_append": false,
00:10:51.619 "compare": false,
00:10:51.619 "compare_and_write": false,
00:10:51.619 "abort": false,
00:10:51.619 "seek_hole": false,
00:10:51.619 "seek_data": false,
00:10:51.619 "copy": false,
00:10:51.619 "nvme_iov_md": false
00:10:51.619 },
00:10:51.619 "memory_domains": [
00:10:51.619 {
00:10:51.619 "dma_device_id": "system",
00:10:51.619 "dma_device_type": 1
00:10:51.619 },
00:10:51.619 {
00:10:51.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:51.619 "dma_device_type": 2
00:10:51.619 },
00:10:51.619 {
00:10:51.619 "dma_device_id": "system",
00:10:51.619 "dma_device_type": 1
00:10:51.619 },
00:10:51.619 {
00:10:51.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:51.619 "dma_device_type": 2
00:10:51.619 },
00:10:51.619 {
00:10:51.619 "dma_device_id": "system",
00:10:51.619 "dma_device_type": 1
00:10:51.619 },
00:10:51.619 {
00:10:51.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:51.619 "dma_device_type": 2
00:10:51.619 },
00:10:51.619 {
00:10:51.619 "dma_device_id": "system",
00:10:51.619 "dma_device_type": 1
00:10:51.619 },
00:10:51.619 {
00:10:51.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:51.619 "dma_device_type": 2
00:10:51.619 }
00:10:51.619 ],
00:10:51.619 "driver_specific": {
00:10:51.619 "raid": {
00:10:51.619 "uuid": "6689dbe2-8baa-4819-b58d-1a14b0a49c63",
00:10:51.619 "strip_size_kb": 64,
00:10:51.619 "state": "online",
00:10:51.619 "raid_level": "concat",
00:10:51.619 "superblock": true,
00:10:51.619 "num_base_bdevs": 4,
00:10:51.619 "num_base_bdevs_discovered": 4,
00:10:51.619 "num_base_bdevs_operational": 4,
00:10:51.619 "base_bdevs_list": [
00:10:51.619 {
00:10:51.619 "name": "BaseBdev1",
00:10:51.619 "uuid": "878de6f2-cc2a-4c62-958b-99ef1a849d4e",
00:10:51.619 "is_configured": true,
00:10:51.619 "data_offset": 2048,
00:10:51.619 "data_size": 63488
00:10:51.619 },
00:10:51.619 {
00:10:51.619 "name": "BaseBdev2",
00:10:51.619 "uuid": "92d906cb-18c4-4510-b1b0-145bda364b83",
00:10:51.619 "is_configured": true,
00:10:51.619 "data_offset": 2048,
00:10:51.619 "data_size": 63488
00:10:51.619 },
00:10:51.619 {
00:10:51.619 "name": "BaseBdev3",
00:10:51.619 "uuid": "430d82f6-b6b1-412b-b039-ac4553bc2636",
00:10:51.619 "is_configured": true,
00:10:51.619 "data_offset": 2048,
00:10:51.619 "data_size": 63488
00:10:51.619 },
00:10:51.619 {
00:10:51.619 "name": "BaseBdev4",
00:10:51.619 "uuid": "e82370c8-3ef8-4381-9367-d9b409d7819b",
00:10:51.619 "is_configured": true,
00:10:51.619 "data_offset": 2048,
00:10:51.619 "data_size": 63488
00:10:51.619 }
00:10:51.619 ]
00:10:51.619 }
00:10:51.619 }
00:10:51.619 }'
00:10:51.619 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:10:51.619 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:10:51.619 BaseBdev2
00:10:51.619 BaseBdev3
00:10:51.619 BaseBdev4'
00:10:51.619 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:51.880 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:10:51.880 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:51.880 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:51.880 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:10:51.880 13:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:51.880 13:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:51.880 13:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:51.880 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:51.880 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:51.880 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:51.880 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:10:51.880 13:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:51.880 13:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:51.880 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:51.880 13:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:51.880 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:51.880 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:51.880 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:51.880 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:10:51.880 13:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:51.880 13:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:51.880 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:51.880 13:20:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:51.880 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:51.880 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:51.880 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:51.880 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:51.880 13:20:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4
00:10:51.880 13:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:51.880 13:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:51.880 13:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:51.880 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:51.880 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:51.880 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:10:51.880 13:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:51.880 13:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:51.880 [2024-11-17 13:20:41.051569] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:10:51.880 [2024-11-17 13:20:41.051636] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:10:51.880 [2024-11-17 13:20:41.051710] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:10:52.140 13:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:52.140 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state
00:10:52.140 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat
00:10:52.140 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in
00:10:52.140 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1
00:10:52.140 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:10:52.140 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3
00:10:52.140 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:52.140 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:10:52.140 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:52.140 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:52.140 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:52.140 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:52.140 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:52.140 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:52.140 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:52.140 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:52.140 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:52.140 13:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:52.140 13:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:52.140 13:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:52.140 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:52.140 "name": "Existed_Raid",
00:10:52.140 "uuid": "6689dbe2-8baa-4819-b58d-1a14b0a49c63",
00:10:52.140 "strip_size_kb": 64,
00:10:52.140 "state": "offline",
00:10:52.140 "raid_level": "concat",
00:10:52.140 "superblock": true,
00:10:52.140 "num_base_bdevs": 4,
00:10:52.140 "num_base_bdevs_discovered": 3,
00:10:52.140 "num_base_bdevs_operational": 3,
00:10:52.140 "base_bdevs_list": [
00:10:52.140 {
00:10:52.140 "name": null,
00:10:52.140 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:52.140 "is_configured": false,
00:10:52.140 "data_offset": 0,
00:10:52.140 "data_size": 63488
00:10:52.140 },
00:10:52.140 {
00:10:52.140 "name": "BaseBdev2",
00:10:52.140 "uuid": "92d906cb-18c4-4510-b1b0-145bda364b83",
00:10:52.140 "is_configured": true,
00:10:52.140 "data_offset": 2048,
00:10:52.140 "data_size": 63488
00:10:52.140 },
00:10:52.140 {
00:10:52.140 "name": "BaseBdev3",
00:10:52.140 "uuid": "430d82f6-b6b1-412b-b039-ac4553bc2636",
00:10:52.140 "is_configured": true,
00:10:52.140 "data_offset": 2048,
00:10:52.140 "data_size": 63488
00:10:52.140 },
00:10:52.140 {
00:10:52.140 "name": "BaseBdev4",
00:10:52.140 "uuid": "e82370c8-3ef8-4381-9367-d9b409d7819b",
00:10:52.140 "is_configured": true,
00:10:52.140 "data_offset": 2048,
00:10:52.140 "data_size": 63488
00:10:52.140 }
00:10:52.140 ]
00:10:52.140 }'
00:10:52.140 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:52.140 13:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:52.400 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:10:52.400 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:10:52.400 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:52.400 13:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:52.400 13:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:52.400 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:10:52.400 13:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:52.660 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:10:52.660 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:10:52.660 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:10:52.660 13:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:52.660 13:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:52.660 [2024-11-17 13:20:41.636628] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:10:52.660 13:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:52.660 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:10:52.660 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:10:52.660 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:52.660 13:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:52.660 13:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:52.660 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:10:52.660 13:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:52.660 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:10:52.660 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:10:52.660 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:10:52.660 13:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:52.660 13:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:52.660 [2024-11-17 13:20:41.787640] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:10:52.660 13:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:52.660 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:10:52.660 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:10:52.921 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:10:52.921 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:52.921 13:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:52.921 13:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:52.921 13:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:52.921 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:10:52.921 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:10:52.921 13:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4
00:10:52.921 13:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:52.921 13:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:52.921 [2024-11-17 13:20:41.940473] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4
00:10:52.921 [2024-11-17 13:20:41.940574] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:10:52.921 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:52.921 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:10:52.921 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:10:52.921 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:52.921 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:10:52.921 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:52.921 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:52.921 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:52.921 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:10:52.921 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:10:52.921 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']'
00:10:52.921 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:10:52.921 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:10:52.921 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:10:52.921 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:52.921 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:52.921 BaseBdev2
00:10:52.921 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:52.921 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:10:52.921 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:10:52.921 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:10:52.921 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:10:52.921 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:10:52.921 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:10:52.921 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:10:52.921 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:52.921 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:52.921 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:52.921 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:10:52.921 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:52.921 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:53.182 [
00:10:53.182 {
00:10:53.182 "name": "BaseBdev2",
00:10:53.182 "aliases": [
00:10:53.182 "c0810c3d-7adc-41c2-a345-fbf42088be57"
00:10:53.182 ],
00:10:53.182 "product_name": "Malloc disk",
00:10:53.182 "block_size": 512,
00:10:53.182 "num_blocks": 65536,
00:10:53.182 "uuid": "c0810c3d-7adc-41c2-a345-fbf42088be57",
00:10:53.182 "assigned_rate_limits": {
00:10:53.182 "rw_ios_per_sec": 0,
00:10:53.183 "rw_mbytes_per_sec": 0,
00:10:53.183 "r_mbytes_per_sec": 0,
00:10:53.183 "w_mbytes_per_sec": 0
00:10:53.183 },
00:10:53.183 "claimed": false,
00:10:53.183 "zoned": false,
00:10:53.183 "supported_io_types": {
00:10:53.183 "read": true,
00:10:53.183 "write": true,
00:10:53.183 "unmap": true,
00:10:53.183 "flush": true,
00:10:53.183 "reset": true,
00:10:53.183 "nvme_admin": false,
00:10:53.183 "nvme_io": false,
00:10:53.183 "nvme_io_md": false,
00:10:53.183 "write_zeroes": true,
00:10:53.183 "zcopy": true,
00:10:53.183 "get_zone_info": false,
00:10:53.183 "zone_management": false,
00:10:53.183 "zone_append": false,
00:10:53.183 "compare": false,
00:10:53.183 "compare_and_write": false,
00:10:53.183 "abort": true,
00:10:53.183 "seek_hole": false,
00:10:53.183 "seek_data": false,
00:10:53.183 "copy": true,
00:10:53.183 "nvme_iov_md": false
00:10:53.183 },
00:10:53.183 "memory_domains": [
00:10:53.183 {
00:10:53.183 "dma_device_id": "system",
00:10:53.183 "dma_device_type": 1
00:10:53.183 },
00:10:53.183 {
00:10:53.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:53.183 "dma_device_type": 2
00:10:53.183 }
00:10:53.183 ],
00:10:53.183 "driver_specific": {}
00:10:53.183 }
00:10:53.183 ]
00:10:53.183 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:53.183 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:10:53.183 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:10:53.183 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:10:53.183 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:10:53.183 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:53.183 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:53.183 BaseBdev3
00:10:53.183 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:53.183 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:10:53.183 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:10:53.183 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:10:53.183 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:10:53.183 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:10:53.183 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:10:53.183 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:10:53.183 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:53.183 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:53.183 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:53.183 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:10:53.183 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:53.183 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:53.183 [
00:10:53.183 {
00:10:53.183 "name": "BaseBdev3", 00:10:53.183 "aliases": [ 00:10:53.183 "709448c1-1ffb-40d6-a193-ec68e7c476e3" 00:10:53.183 ], 00:10:53.183 "product_name": "Malloc disk", 00:10:53.183 "block_size": 512, 00:10:53.183 "num_blocks": 65536, 00:10:53.183 "uuid": "709448c1-1ffb-40d6-a193-ec68e7c476e3", 00:10:53.183 "assigned_rate_limits": { 00:10:53.183 "rw_ios_per_sec": 0, 00:10:53.183 "rw_mbytes_per_sec": 0, 00:10:53.183 "r_mbytes_per_sec": 0, 00:10:53.183 "w_mbytes_per_sec": 0 00:10:53.183 }, 00:10:53.183 "claimed": false, 00:10:53.183 "zoned": false, 00:10:53.183 "supported_io_types": { 00:10:53.183 "read": true, 00:10:53.183 "write": true, 00:10:53.183 "unmap": true, 00:10:53.183 "flush": true, 00:10:53.183 "reset": true, 00:10:53.183 "nvme_admin": false, 00:10:53.183 "nvme_io": false, 00:10:53.183 "nvme_io_md": false, 00:10:53.183 "write_zeroes": true, 00:10:53.183 "zcopy": true, 00:10:53.183 "get_zone_info": false, 00:10:53.183 "zone_management": false, 00:10:53.183 "zone_append": false, 00:10:53.183 "compare": false, 00:10:53.183 "compare_and_write": false, 00:10:53.183 "abort": true, 00:10:53.183 "seek_hole": false, 00:10:53.183 "seek_data": false, 00:10:53.183 "copy": true, 00:10:53.183 "nvme_iov_md": false 00:10:53.183 }, 00:10:53.183 "memory_domains": [ 00:10:53.183 { 00:10:53.183 "dma_device_id": "system", 00:10:53.183 "dma_device_type": 1 00:10:53.183 }, 00:10:53.183 { 00:10:53.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.183 "dma_device_type": 2 00:10:53.183 } 00:10:53.183 ], 00:10:53.183 "driver_specific": {} 00:10:53.183 } 00:10:53.183 ] 00:10:53.183 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.183 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:53.183 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:53.183 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:53.183 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:53.183 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.183 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.183 BaseBdev4 00:10:53.183 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.183 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:53.183 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:53.183 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:53.183 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:53.183 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:53.183 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:53.183 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:53.183 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.183 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.183 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.183 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:53.183 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.183 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:53.183 [ 00:10:53.183 { 00:10:53.183 "name": "BaseBdev4", 00:10:53.183 "aliases": [ 00:10:53.183 "22b1b181-e436-48aa-b917-dfb1307e1017" 00:10:53.183 ], 00:10:53.183 "product_name": "Malloc disk", 00:10:53.183 "block_size": 512, 00:10:53.183 "num_blocks": 65536, 00:10:53.183 "uuid": "22b1b181-e436-48aa-b917-dfb1307e1017", 00:10:53.183 "assigned_rate_limits": { 00:10:53.183 "rw_ios_per_sec": 0, 00:10:53.183 "rw_mbytes_per_sec": 0, 00:10:53.183 "r_mbytes_per_sec": 0, 00:10:53.183 "w_mbytes_per_sec": 0 00:10:53.183 }, 00:10:53.183 "claimed": false, 00:10:53.183 "zoned": false, 00:10:53.183 "supported_io_types": { 00:10:53.183 "read": true, 00:10:53.183 "write": true, 00:10:53.183 "unmap": true, 00:10:53.183 "flush": true, 00:10:53.183 "reset": true, 00:10:53.183 "nvme_admin": false, 00:10:53.183 "nvme_io": false, 00:10:53.183 "nvme_io_md": false, 00:10:53.183 "write_zeroes": true, 00:10:53.183 "zcopy": true, 00:10:53.183 "get_zone_info": false, 00:10:53.183 "zone_management": false, 00:10:53.183 "zone_append": false, 00:10:53.183 "compare": false, 00:10:53.183 "compare_and_write": false, 00:10:53.183 "abort": true, 00:10:53.183 "seek_hole": false, 00:10:53.183 "seek_data": false, 00:10:53.183 "copy": true, 00:10:53.183 "nvme_iov_md": false 00:10:53.183 }, 00:10:53.183 "memory_domains": [ 00:10:53.183 { 00:10:53.183 "dma_device_id": "system", 00:10:53.183 "dma_device_type": 1 00:10:53.183 }, 00:10:53.183 { 00:10:53.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.183 "dma_device_type": 2 00:10:53.183 } 00:10:53.183 ], 00:10:53.183 "driver_specific": {} 00:10:53.183 } 00:10:53.183 ] 00:10:53.183 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.183 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:53.183 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:53.183 13:20:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:53.183 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:53.183 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.184 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.184 [2024-11-17 13:20:42.326806] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:53.184 [2024-11-17 13:20:42.326891] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:53.184 [2024-11-17 13:20:42.326933] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:53.184 [2024-11-17 13:20:42.328775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:53.184 [2024-11-17 13:20:42.328877] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:53.184 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.184 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:53.184 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.184 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:53.184 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:53.184 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.184 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:53.184 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.184 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.184 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.184 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.184 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.184 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.184 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.184 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.184 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.184 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.184 "name": "Existed_Raid", 00:10:53.184 "uuid": "f192e971-65f9-4354-b1fd-5fe0dcb546eb", 00:10:53.184 "strip_size_kb": 64, 00:10:53.184 "state": "configuring", 00:10:53.184 "raid_level": "concat", 00:10:53.184 "superblock": true, 00:10:53.184 "num_base_bdevs": 4, 00:10:53.184 "num_base_bdevs_discovered": 3, 00:10:53.184 "num_base_bdevs_operational": 4, 00:10:53.184 "base_bdevs_list": [ 00:10:53.184 { 00:10:53.184 "name": "BaseBdev1", 00:10:53.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.184 "is_configured": false, 00:10:53.184 "data_offset": 0, 00:10:53.184 "data_size": 0 00:10:53.184 }, 00:10:53.184 { 00:10:53.184 "name": "BaseBdev2", 00:10:53.184 "uuid": "c0810c3d-7adc-41c2-a345-fbf42088be57", 00:10:53.184 "is_configured": true, 00:10:53.184 "data_offset": 2048, 00:10:53.184 "data_size": 63488 
00:10:53.184 }, 00:10:53.184 { 00:10:53.184 "name": "BaseBdev3", 00:10:53.184 "uuid": "709448c1-1ffb-40d6-a193-ec68e7c476e3", 00:10:53.184 "is_configured": true, 00:10:53.184 "data_offset": 2048, 00:10:53.184 "data_size": 63488 00:10:53.184 }, 00:10:53.184 { 00:10:53.184 "name": "BaseBdev4", 00:10:53.184 "uuid": "22b1b181-e436-48aa-b917-dfb1307e1017", 00:10:53.184 "is_configured": true, 00:10:53.184 "data_offset": 2048, 00:10:53.184 "data_size": 63488 00:10:53.184 } 00:10:53.184 ] 00:10:53.184 }' 00:10:53.184 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.184 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.752 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:53.752 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.752 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.752 [2024-11-17 13:20:42.798021] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:53.752 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.752 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:53.752 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.752 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:53.752 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:53.752 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.752 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:53.753 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.753 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.753 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.753 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.753 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.753 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.753 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.753 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.753 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.753 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.753 "name": "Existed_Raid", 00:10:53.753 "uuid": "f192e971-65f9-4354-b1fd-5fe0dcb546eb", 00:10:53.753 "strip_size_kb": 64, 00:10:53.753 "state": "configuring", 00:10:53.753 "raid_level": "concat", 00:10:53.753 "superblock": true, 00:10:53.753 "num_base_bdevs": 4, 00:10:53.753 "num_base_bdevs_discovered": 2, 00:10:53.753 "num_base_bdevs_operational": 4, 00:10:53.753 "base_bdevs_list": [ 00:10:53.753 { 00:10:53.753 "name": "BaseBdev1", 00:10:53.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.753 "is_configured": false, 00:10:53.753 "data_offset": 0, 00:10:53.753 "data_size": 0 00:10:53.753 }, 00:10:53.753 { 00:10:53.753 "name": null, 00:10:53.753 "uuid": "c0810c3d-7adc-41c2-a345-fbf42088be57", 00:10:53.753 "is_configured": false, 00:10:53.753 "data_offset": 0, 00:10:53.753 "data_size": 63488 
00:10:53.753 }, 00:10:53.753 { 00:10:53.753 "name": "BaseBdev3", 00:10:53.753 "uuid": "709448c1-1ffb-40d6-a193-ec68e7c476e3", 00:10:53.753 "is_configured": true, 00:10:53.753 "data_offset": 2048, 00:10:53.753 "data_size": 63488 00:10:53.753 }, 00:10:53.753 { 00:10:53.753 "name": "BaseBdev4", 00:10:53.753 "uuid": "22b1b181-e436-48aa-b917-dfb1307e1017", 00:10:53.753 "is_configured": true, 00:10:53.753 "data_offset": 2048, 00:10:53.753 "data_size": 63488 00:10:53.753 } 00:10:53.753 ] 00:10:53.753 }' 00:10:53.753 13:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.753 13:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.322 13:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.322 13:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:54.322 13:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.322 13:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.322 13:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.322 13:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:54.322 13:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:54.322 13:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.322 13:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.322 [2024-11-17 13:20:43.322654] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:54.322 BaseBdev1 00:10:54.322 13:20:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.322 13:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:54.322 13:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:54.322 13:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:54.322 13:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:54.322 13:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:54.322 13:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:54.322 13:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:54.322 13:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.322 13:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.322 13:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.322 13:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:54.322 13:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.322 13:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.322 [ 00:10:54.322 { 00:10:54.322 "name": "BaseBdev1", 00:10:54.322 "aliases": [ 00:10:54.322 "bfcab52b-0c17-4bdd-b99c-9b46e15e1679" 00:10:54.322 ], 00:10:54.322 "product_name": "Malloc disk", 00:10:54.322 "block_size": 512, 00:10:54.322 "num_blocks": 65536, 00:10:54.322 "uuid": "bfcab52b-0c17-4bdd-b99c-9b46e15e1679", 00:10:54.322 "assigned_rate_limits": { 00:10:54.322 "rw_ios_per_sec": 0, 00:10:54.322 "rw_mbytes_per_sec": 0, 
00:10:54.322 "r_mbytes_per_sec": 0, 00:10:54.322 "w_mbytes_per_sec": 0 00:10:54.322 }, 00:10:54.322 "claimed": true, 00:10:54.322 "claim_type": "exclusive_write", 00:10:54.322 "zoned": false, 00:10:54.322 "supported_io_types": { 00:10:54.322 "read": true, 00:10:54.322 "write": true, 00:10:54.322 "unmap": true, 00:10:54.322 "flush": true, 00:10:54.322 "reset": true, 00:10:54.322 "nvme_admin": false, 00:10:54.322 "nvme_io": false, 00:10:54.322 "nvme_io_md": false, 00:10:54.322 "write_zeroes": true, 00:10:54.322 "zcopy": true, 00:10:54.322 "get_zone_info": false, 00:10:54.322 "zone_management": false, 00:10:54.322 "zone_append": false, 00:10:54.322 "compare": false, 00:10:54.322 "compare_and_write": false, 00:10:54.322 "abort": true, 00:10:54.322 "seek_hole": false, 00:10:54.322 "seek_data": false, 00:10:54.322 "copy": true, 00:10:54.322 "nvme_iov_md": false 00:10:54.322 }, 00:10:54.322 "memory_domains": [ 00:10:54.322 { 00:10:54.322 "dma_device_id": "system", 00:10:54.322 "dma_device_type": 1 00:10:54.322 }, 00:10:54.322 { 00:10:54.322 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.322 "dma_device_type": 2 00:10:54.322 } 00:10:54.322 ], 00:10:54.322 "driver_specific": {} 00:10:54.322 } 00:10:54.322 ] 00:10:54.322 13:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.322 13:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:54.322 13:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:54.322 13:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.322 13:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:54.322 13:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:54.322 13:20:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:54.322 13:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:54.322 13:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.323 13:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.323 13:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.323 13:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.323 13:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.323 13:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.323 13:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.323 13:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.323 13:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.323 13:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.323 "name": "Existed_Raid", 00:10:54.323 "uuid": "f192e971-65f9-4354-b1fd-5fe0dcb546eb", 00:10:54.323 "strip_size_kb": 64, 00:10:54.323 "state": "configuring", 00:10:54.323 "raid_level": "concat", 00:10:54.323 "superblock": true, 00:10:54.323 "num_base_bdevs": 4, 00:10:54.323 "num_base_bdevs_discovered": 3, 00:10:54.323 "num_base_bdevs_operational": 4, 00:10:54.323 "base_bdevs_list": [ 00:10:54.323 { 00:10:54.323 "name": "BaseBdev1", 00:10:54.323 "uuid": "bfcab52b-0c17-4bdd-b99c-9b46e15e1679", 00:10:54.323 "is_configured": true, 00:10:54.323 "data_offset": 2048, 00:10:54.323 "data_size": 63488 00:10:54.323 }, 00:10:54.323 { 
00:10:54.323 "name": null, 00:10:54.323 "uuid": "c0810c3d-7adc-41c2-a345-fbf42088be57", 00:10:54.323 "is_configured": false, 00:10:54.323 "data_offset": 0, 00:10:54.323 "data_size": 63488 00:10:54.323 }, 00:10:54.323 { 00:10:54.323 "name": "BaseBdev3", 00:10:54.323 "uuid": "709448c1-1ffb-40d6-a193-ec68e7c476e3", 00:10:54.323 "is_configured": true, 00:10:54.323 "data_offset": 2048, 00:10:54.323 "data_size": 63488 00:10:54.323 }, 00:10:54.323 { 00:10:54.323 "name": "BaseBdev4", 00:10:54.323 "uuid": "22b1b181-e436-48aa-b917-dfb1307e1017", 00:10:54.323 "is_configured": true, 00:10:54.323 "data_offset": 2048, 00:10:54.323 "data_size": 63488 00:10:54.323 } 00:10:54.323 ] 00:10:54.323 }' 00:10:54.323 13:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.323 13:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.582 13:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:54.582 13:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.582 13:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.582 13:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.842 13:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.842 13:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:54.842 13:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:54.842 13:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.842 13:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.842 [2024-11-17 13:20:43.837898] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:54.842 13:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.842 13:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:54.842 13:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.842 13:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:54.842 13:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:54.842 13:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:54.842 13:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:54.842 13:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.842 13:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.842 13:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.842 13:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.842 13:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.842 13:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.842 13:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.842 13:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.842 13:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.842 13:20:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.842 "name": "Existed_Raid", 00:10:54.842 "uuid": "f192e971-65f9-4354-b1fd-5fe0dcb546eb", 00:10:54.842 "strip_size_kb": 64, 00:10:54.842 "state": "configuring", 00:10:54.842 "raid_level": "concat", 00:10:54.842 "superblock": true, 00:10:54.842 "num_base_bdevs": 4, 00:10:54.842 "num_base_bdevs_discovered": 2, 00:10:54.842 "num_base_bdevs_operational": 4, 00:10:54.842 "base_bdevs_list": [ 00:10:54.842 { 00:10:54.842 "name": "BaseBdev1", 00:10:54.842 "uuid": "bfcab52b-0c17-4bdd-b99c-9b46e15e1679", 00:10:54.842 "is_configured": true, 00:10:54.842 "data_offset": 2048, 00:10:54.842 "data_size": 63488 00:10:54.842 }, 00:10:54.842 { 00:10:54.842 "name": null, 00:10:54.842 "uuid": "c0810c3d-7adc-41c2-a345-fbf42088be57", 00:10:54.842 "is_configured": false, 00:10:54.842 "data_offset": 0, 00:10:54.842 "data_size": 63488 00:10:54.842 }, 00:10:54.842 { 00:10:54.842 "name": null, 00:10:54.842 "uuid": "709448c1-1ffb-40d6-a193-ec68e7c476e3", 00:10:54.842 "is_configured": false, 00:10:54.842 "data_offset": 0, 00:10:54.842 "data_size": 63488 00:10:54.842 }, 00:10:54.842 { 00:10:54.842 "name": "BaseBdev4", 00:10:54.842 "uuid": "22b1b181-e436-48aa-b917-dfb1307e1017", 00:10:54.842 "is_configured": true, 00:10:54.843 "data_offset": 2048, 00:10:54.843 "data_size": 63488 00:10:54.843 } 00:10:54.843 ] 00:10:54.843 }' 00:10:54.843 13:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.843 13:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.102 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.102 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:55.102 13:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.102 
13:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.362 13:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.362 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:55.362 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:55.362 13:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.362 13:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.362 [2024-11-17 13:20:44.373027] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:55.362 13:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.362 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:55.362 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.362 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:55.362 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:55.362 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.362 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:55.362 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.362 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.362 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:55.362 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.362 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.362 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.362 13:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.362 13:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.362 13:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.362 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.362 "name": "Existed_Raid", 00:10:55.362 "uuid": "f192e971-65f9-4354-b1fd-5fe0dcb546eb", 00:10:55.362 "strip_size_kb": 64, 00:10:55.362 "state": "configuring", 00:10:55.362 "raid_level": "concat", 00:10:55.362 "superblock": true, 00:10:55.362 "num_base_bdevs": 4, 00:10:55.362 "num_base_bdevs_discovered": 3, 00:10:55.362 "num_base_bdevs_operational": 4, 00:10:55.362 "base_bdevs_list": [ 00:10:55.362 { 00:10:55.362 "name": "BaseBdev1", 00:10:55.362 "uuid": "bfcab52b-0c17-4bdd-b99c-9b46e15e1679", 00:10:55.362 "is_configured": true, 00:10:55.362 "data_offset": 2048, 00:10:55.362 "data_size": 63488 00:10:55.362 }, 00:10:55.362 { 00:10:55.362 "name": null, 00:10:55.362 "uuid": "c0810c3d-7adc-41c2-a345-fbf42088be57", 00:10:55.362 "is_configured": false, 00:10:55.362 "data_offset": 0, 00:10:55.362 "data_size": 63488 00:10:55.362 }, 00:10:55.362 { 00:10:55.362 "name": "BaseBdev3", 00:10:55.362 "uuid": "709448c1-1ffb-40d6-a193-ec68e7c476e3", 00:10:55.362 "is_configured": true, 00:10:55.362 "data_offset": 2048, 00:10:55.362 "data_size": 63488 00:10:55.362 }, 00:10:55.362 { 00:10:55.362 "name": "BaseBdev4", 00:10:55.362 "uuid": 
"22b1b181-e436-48aa-b917-dfb1307e1017", 00:10:55.362 "is_configured": true, 00:10:55.362 "data_offset": 2048, 00:10:55.362 "data_size": 63488 00:10:55.362 } 00:10:55.362 ] 00:10:55.362 }' 00:10:55.362 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.362 13:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.622 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.622 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:55.622 13:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.622 13:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.622 13:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.622 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:55.622 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:55.622 13:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.622 13:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.622 [2024-11-17 13:20:44.808341] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:55.883 13:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.883 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:55.883 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.883 13:20:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:55.883 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:55.883 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.883 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:55.883 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.883 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.883 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.883 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.883 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.883 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.883 13:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.883 13:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.883 13:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.883 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.883 "name": "Existed_Raid", 00:10:55.883 "uuid": "f192e971-65f9-4354-b1fd-5fe0dcb546eb", 00:10:55.883 "strip_size_kb": 64, 00:10:55.883 "state": "configuring", 00:10:55.883 "raid_level": "concat", 00:10:55.883 "superblock": true, 00:10:55.883 "num_base_bdevs": 4, 00:10:55.883 "num_base_bdevs_discovered": 2, 00:10:55.883 "num_base_bdevs_operational": 4, 00:10:55.883 "base_bdevs_list": [ 00:10:55.883 { 00:10:55.883 "name": null, 00:10:55.883 
"uuid": "bfcab52b-0c17-4bdd-b99c-9b46e15e1679", 00:10:55.883 "is_configured": false, 00:10:55.883 "data_offset": 0, 00:10:55.883 "data_size": 63488 00:10:55.883 }, 00:10:55.883 { 00:10:55.883 "name": null, 00:10:55.883 "uuid": "c0810c3d-7adc-41c2-a345-fbf42088be57", 00:10:55.883 "is_configured": false, 00:10:55.883 "data_offset": 0, 00:10:55.883 "data_size": 63488 00:10:55.883 }, 00:10:55.883 { 00:10:55.883 "name": "BaseBdev3", 00:10:55.883 "uuid": "709448c1-1ffb-40d6-a193-ec68e7c476e3", 00:10:55.883 "is_configured": true, 00:10:55.883 "data_offset": 2048, 00:10:55.883 "data_size": 63488 00:10:55.883 }, 00:10:55.883 { 00:10:55.883 "name": "BaseBdev4", 00:10:55.883 "uuid": "22b1b181-e436-48aa-b917-dfb1307e1017", 00:10:55.883 "is_configured": true, 00:10:55.883 "data_offset": 2048, 00:10:55.883 "data_size": 63488 00:10:55.883 } 00:10:55.883 ] 00:10:55.883 }' 00:10:55.883 13:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.883 13:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.454 13:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:56.454 13:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.454 13:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.454 13:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.454 13:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.454 13:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:56.454 13:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:56.454 13:20:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.454 13:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.454 [2024-11-17 13:20:45.406981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:56.454 13:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.454 13:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:56.454 13:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.454 13:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:56.454 13:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:56.454 13:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:56.454 13:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:56.454 13:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.454 13:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.454 13:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.454 13:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.454 13:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.454 13:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.454 13:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.454 13:20:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.454 13:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.454 13:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.454 "name": "Existed_Raid", 00:10:56.454 "uuid": "f192e971-65f9-4354-b1fd-5fe0dcb546eb", 00:10:56.454 "strip_size_kb": 64, 00:10:56.454 "state": "configuring", 00:10:56.454 "raid_level": "concat", 00:10:56.454 "superblock": true, 00:10:56.454 "num_base_bdevs": 4, 00:10:56.454 "num_base_bdevs_discovered": 3, 00:10:56.454 "num_base_bdevs_operational": 4, 00:10:56.454 "base_bdevs_list": [ 00:10:56.454 { 00:10:56.454 "name": null, 00:10:56.454 "uuid": "bfcab52b-0c17-4bdd-b99c-9b46e15e1679", 00:10:56.454 "is_configured": false, 00:10:56.454 "data_offset": 0, 00:10:56.454 "data_size": 63488 00:10:56.454 }, 00:10:56.454 { 00:10:56.454 "name": "BaseBdev2", 00:10:56.454 "uuid": "c0810c3d-7adc-41c2-a345-fbf42088be57", 00:10:56.454 "is_configured": true, 00:10:56.454 "data_offset": 2048, 00:10:56.454 "data_size": 63488 00:10:56.454 }, 00:10:56.454 { 00:10:56.454 "name": "BaseBdev3", 00:10:56.454 "uuid": "709448c1-1ffb-40d6-a193-ec68e7c476e3", 00:10:56.454 "is_configured": true, 00:10:56.454 "data_offset": 2048, 00:10:56.454 "data_size": 63488 00:10:56.454 }, 00:10:56.454 { 00:10:56.454 "name": "BaseBdev4", 00:10:56.454 "uuid": "22b1b181-e436-48aa-b917-dfb1307e1017", 00:10:56.454 "is_configured": true, 00:10:56.454 "data_offset": 2048, 00:10:56.454 "data_size": 63488 00:10:56.454 } 00:10:56.454 ] 00:10:56.454 }' 00:10:56.454 13:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.454 13:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.714 13:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.714 13:20:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.714 13:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.714 13:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:56.714 13:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.714 13:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:56.714 13:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.714 13:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:56.714 13:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.714 13:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.714 13:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.974 13:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u bfcab52b-0c17-4bdd-b99c-9b46e15e1679 00:10:56.974 13:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.974 13:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.974 [2024-11-17 13:20:45.977807] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:56.974 [2024-11-17 13:20:45.978025] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:56.974 [2024-11-17 13:20:45.978037] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:56.974 [2024-11-17 13:20:45.978333] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:10:56.974 [2024-11-17 13:20:45.978481] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:56.974 [2024-11-17 13:20:45.978509] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:56.974 [2024-11-17 13:20:45.978649] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:56.974 NewBaseBdev 00:10:56.974 13:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.974 13:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:56.974 13:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:56.974 13:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:56.974 13:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:56.974 13:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:56.974 13:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:56.974 13:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:56.974 13:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.974 13:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.974 13:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.974 13:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:56.974 13:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.974 13:20:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.974 [ 00:10:56.974 { 00:10:56.974 "name": "NewBaseBdev", 00:10:56.974 "aliases": [ 00:10:56.974 "bfcab52b-0c17-4bdd-b99c-9b46e15e1679" 00:10:56.974 ], 00:10:56.974 "product_name": "Malloc disk", 00:10:56.974 "block_size": 512, 00:10:56.974 "num_blocks": 65536, 00:10:56.974 "uuid": "bfcab52b-0c17-4bdd-b99c-9b46e15e1679", 00:10:56.974 "assigned_rate_limits": { 00:10:56.974 "rw_ios_per_sec": 0, 00:10:56.974 "rw_mbytes_per_sec": 0, 00:10:56.974 "r_mbytes_per_sec": 0, 00:10:56.974 "w_mbytes_per_sec": 0 00:10:56.974 }, 00:10:56.974 "claimed": true, 00:10:56.974 "claim_type": "exclusive_write", 00:10:56.974 "zoned": false, 00:10:56.974 "supported_io_types": { 00:10:56.974 "read": true, 00:10:56.974 "write": true, 00:10:56.974 "unmap": true, 00:10:56.974 "flush": true, 00:10:56.974 "reset": true, 00:10:56.974 "nvme_admin": false, 00:10:56.974 "nvme_io": false, 00:10:56.974 "nvme_io_md": false, 00:10:56.974 "write_zeroes": true, 00:10:56.974 "zcopy": true, 00:10:56.974 "get_zone_info": false, 00:10:56.974 "zone_management": false, 00:10:56.974 "zone_append": false, 00:10:56.974 "compare": false, 00:10:56.974 "compare_and_write": false, 00:10:56.974 "abort": true, 00:10:56.974 "seek_hole": false, 00:10:56.974 "seek_data": false, 00:10:56.974 "copy": true, 00:10:56.974 "nvme_iov_md": false 00:10:56.974 }, 00:10:56.974 "memory_domains": [ 00:10:56.974 { 00:10:56.974 "dma_device_id": "system", 00:10:56.974 "dma_device_type": 1 00:10:56.974 }, 00:10:56.974 { 00:10:56.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.974 "dma_device_type": 2 00:10:56.974 } 00:10:56.974 ], 00:10:56.974 "driver_specific": {} 00:10:56.974 } 00:10:56.974 ] 00:10:56.974 13:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.974 13:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:56.974 13:20:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:56.975 13:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.975 13:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:56.975 13:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:56.975 13:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:56.975 13:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:56.975 13:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.975 13:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.975 13:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.975 13:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.975 13:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.975 13:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.975 13:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.975 13:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.975 13:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.975 13:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.975 "name": "Existed_Raid", 00:10:56.975 "uuid": "f192e971-65f9-4354-b1fd-5fe0dcb546eb", 00:10:56.975 "strip_size_kb": 64, 00:10:56.975 
"state": "online", 00:10:56.975 "raid_level": "concat", 00:10:56.975 "superblock": true, 00:10:56.975 "num_base_bdevs": 4, 00:10:56.975 "num_base_bdevs_discovered": 4, 00:10:56.975 "num_base_bdevs_operational": 4, 00:10:56.975 "base_bdevs_list": [ 00:10:56.975 { 00:10:56.975 "name": "NewBaseBdev", 00:10:56.975 "uuid": "bfcab52b-0c17-4bdd-b99c-9b46e15e1679", 00:10:56.975 "is_configured": true, 00:10:56.975 "data_offset": 2048, 00:10:56.975 "data_size": 63488 00:10:56.975 }, 00:10:56.975 { 00:10:56.975 "name": "BaseBdev2", 00:10:56.975 "uuid": "c0810c3d-7adc-41c2-a345-fbf42088be57", 00:10:56.975 "is_configured": true, 00:10:56.975 "data_offset": 2048, 00:10:56.975 "data_size": 63488 00:10:56.975 }, 00:10:56.975 { 00:10:56.975 "name": "BaseBdev3", 00:10:56.975 "uuid": "709448c1-1ffb-40d6-a193-ec68e7c476e3", 00:10:56.975 "is_configured": true, 00:10:56.975 "data_offset": 2048, 00:10:56.975 "data_size": 63488 00:10:56.975 }, 00:10:56.975 { 00:10:56.975 "name": "BaseBdev4", 00:10:56.975 "uuid": "22b1b181-e436-48aa-b917-dfb1307e1017", 00:10:56.975 "is_configured": true, 00:10:56.975 "data_offset": 2048, 00:10:56.975 "data_size": 63488 00:10:56.975 } 00:10:56.975 ] 00:10:56.975 }' 00:10:56.975 13:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.975 13:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.235 13:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:57.235 13:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:57.235 13:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:57.235 13:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:57.235 13:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:57.235 
13:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:57.235 13:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:57.235 13:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.235 13:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.235 13:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:57.235 [2024-11-17 13:20:46.393561] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:57.235 13:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.235 13:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:57.235 "name": "Existed_Raid", 00:10:57.235 "aliases": [ 00:10:57.235 "f192e971-65f9-4354-b1fd-5fe0dcb546eb" 00:10:57.235 ], 00:10:57.235 "product_name": "Raid Volume", 00:10:57.235 "block_size": 512, 00:10:57.235 "num_blocks": 253952, 00:10:57.235 "uuid": "f192e971-65f9-4354-b1fd-5fe0dcb546eb", 00:10:57.235 "assigned_rate_limits": { 00:10:57.235 "rw_ios_per_sec": 0, 00:10:57.235 "rw_mbytes_per_sec": 0, 00:10:57.236 "r_mbytes_per_sec": 0, 00:10:57.236 "w_mbytes_per_sec": 0 00:10:57.236 }, 00:10:57.236 "claimed": false, 00:10:57.236 "zoned": false, 00:10:57.236 "supported_io_types": { 00:10:57.236 "read": true, 00:10:57.236 "write": true, 00:10:57.236 "unmap": true, 00:10:57.236 "flush": true, 00:10:57.236 "reset": true, 00:10:57.236 "nvme_admin": false, 00:10:57.236 "nvme_io": false, 00:10:57.236 "nvme_io_md": false, 00:10:57.236 "write_zeroes": true, 00:10:57.236 "zcopy": false, 00:10:57.236 "get_zone_info": false, 00:10:57.236 "zone_management": false, 00:10:57.236 "zone_append": false, 00:10:57.236 "compare": false, 00:10:57.236 "compare_and_write": false, 00:10:57.236 "abort": 
false, 00:10:57.236 "seek_hole": false, 00:10:57.236 "seek_data": false, 00:10:57.236 "copy": false, 00:10:57.236 "nvme_iov_md": false 00:10:57.236 }, 00:10:57.236 "memory_domains": [ 00:10:57.236 { 00:10:57.236 "dma_device_id": "system", 00:10:57.236 "dma_device_type": 1 00:10:57.236 }, 00:10:57.236 { 00:10:57.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.236 "dma_device_type": 2 00:10:57.236 }, 00:10:57.236 { 00:10:57.236 "dma_device_id": "system", 00:10:57.236 "dma_device_type": 1 00:10:57.236 }, 00:10:57.236 { 00:10:57.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.236 "dma_device_type": 2 00:10:57.236 }, 00:10:57.236 { 00:10:57.236 "dma_device_id": "system", 00:10:57.236 "dma_device_type": 1 00:10:57.236 }, 00:10:57.236 { 00:10:57.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.236 "dma_device_type": 2 00:10:57.236 }, 00:10:57.236 { 00:10:57.236 "dma_device_id": "system", 00:10:57.236 "dma_device_type": 1 00:10:57.236 }, 00:10:57.236 { 00:10:57.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.236 "dma_device_type": 2 00:10:57.236 } 00:10:57.236 ], 00:10:57.236 "driver_specific": { 00:10:57.236 "raid": { 00:10:57.236 "uuid": "f192e971-65f9-4354-b1fd-5fe0dcb546eb", 00:10:57.236 "strip_size_kb": 64, 00:10:57.236 "state": "online", 00:10:57.236 "raid_level": "concat", 00:10:57.236 "superblock": true, 00:10:57.236 "num_base_bdevs": 4, 00:10:57.236 "num_base_bdevs_discovered": 4, 00:10:57.236 "num_base_bdevs_operational": 4, 00:10:57.236 "base_bdevs_list": [ 00:10:57.236 { 00:10:57.236 "name": "NewBaseBdev", 00:10:57.236 "uuid": "bfcab52b-0c17-4bdd-b99c-9b46e15e1679", 00:10:57.236 "is_configured": true, 00:10:57.236 "data_offset": 2048, 00:10:57.236 "data_size": 63488 00:10:57.236 }, 00:10:57.236 { 00:10:57.236 "name": "BaseBdev2", 00:10:57.236 "uuid": "c0810c3d-7adc-41c2-a345-fbf42088be57", 00:10:57.236 "is_configured": true, 00:10:57.236 "data_offset": 2048, 00:10:57.236 "data_size": 63488 00:10:57.236 }, 00:10:57.236 { 00:10:57.236 
"name": "BaseBdev3", 00:10:57.236 "uuid": "709448c1-1ffb-40d6-a193-ec68e7c476e3", 00:10:57.236 "is_configured": true, 00:10:57.236 "data_offset": 2048, 00:10:57.236 "data_size": 63488 00:10:57.236 }, 00:10:57.236 { 00:10:57.236 "name": "BaseBdev4", 00:10:57.236 "uuid": "22b1b181-e436-48aa-b917-dfb1307e1017", 00:10:57.236 "is_configured": true, 00:10:57.236 "data_offset": 2048, 00:10:57.236 "data_size": 63488 00:10:57.236 } 00:10:57.236 ] 00:10:57.236 } 00:10:57.236 } 00:10:57.236 }' 00:10:57.236 13:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:57.497 13:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:57.497 BaseBdev2 00:10:57.497 BaseBdev3 00:10:57.497 BaseBdev4' 00:10:57.497 13:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.497 13:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:57.497 13:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.497 13:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:57.497 13:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.497 13:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.497 13:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.497 13:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.497 13:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.497 13:20:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.497 13:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.497 13:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:57.497 13:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.497 13:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.497 13:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.497 13:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.497 13:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.497 13:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.497 13:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.497 13:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:57.497 13:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.497 13:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.497 13:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.497 13:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.497 13:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.497 13:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:10:57.497 13:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.497 13:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:57.497 13:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.497 13:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.497 13:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.497 13:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.758 13:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.758 13:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.758 13:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:57.758 13:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.758 13:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.758 [2024-11-17 13:20:46.744602] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:57.758 [2024-11-17 13:20:46.744640] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:57.758 [2024-11-17 13:20:46.744722] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:57.758 [2024-11-17 13:20:46.744788] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:57.758 [2024-11-17 13:20:46.744805] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:10:57.758 13:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.758 13:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 71864 00:10:57.758 13:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 71864 ']' 00:10:57.758 13:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 71864 00:10:57.758 13:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:57.758 13:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:57.758 13:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71864 00:10:57.758 killing process with pid 71864 00:10:57.758 13:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:57.758 13:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:57.758 13:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71864' 00:10:57.758 13:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 71864 00:10:57.758 [2024-11-17 13:20:46.784297] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:57.758 13:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 71864 00:10:58.018 [2024-11-17 13:20:47.167560] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:59.397 ************************************ 00:10:59.397 END TEST raid_state_function_test_sb 00:10:59.397 ************************************ 00:10:59.397 13:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:59.397 00:10:59.397 real 0m11.472s 00:10:59.397 user 0m18.178s 00:10:59.397 sys 
0m2.074s 00:10:59.397 13:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:59.397 13:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.397 13:20:48 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:10:59.397 13:20:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:59.397 13:20:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:59.397 13:20:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:59.397 ************************************ 00:10:59.397 START TEST raid_superblock_test 00:10:59.397 ************************************ 00:10:59.397 13:20:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:10:59.397 13:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:10:59.397 13:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:59.397 13:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:59.397 13:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:59.397 13:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:59.397 13:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:59.397 13:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:59.397 13:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:59.397 13:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:59.397 13:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:59.397 13:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:10:59.397 13:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:59.397 13:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:59.397 13:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:10:59.397 13:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:59.397 13:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:59.397 13:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72534 00:10:59.397 13:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72534 00:10:59.397 13:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:59.397 13:20:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72534 ']' 00:10:59.397 13:20:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:59.397 13:20:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:59.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:59.397 13:20:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:59.397 13:20:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:59.397 13:20:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.397 [2024-11-17 13:20:48.423625] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:10:59.397 [2024-11-17 13:20:48.423816] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72534 ] 00:10:59.655 [2024-11-17 13:20:48.625440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:59.655 [2024-11-17 13:20:48.742953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.914 [2024-11-17 13:20:48.923823] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:59.914 [2024-11-17 13:20:48.923885] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:00.171 13:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:00.171 13:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:00.171 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:00.171 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:00.171 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:00.171 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:00.171 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:00.171 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:00.171 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:00.171 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:00.171 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:00.171 
13:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.171 13:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.171 malloc1 00:11:00.171 13:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.171 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:00.171 13:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.171 13:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.171 [2024-11-17 13:20:49.301300] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:00.171 [2024-11-17 13:20:49.301371] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:00.171 [2024-11-17 13:20:49.301414] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:00.171 [2024-11-17 13:20:49.301424] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:00.171 [2024-11-17 13:20:49.303740] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:00.171 [2024-11-17 13:20:49.303779] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:00.171 pt1 00:11:00.172 13:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.172 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:00.172 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:00.172 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:00.172 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:00.172 13:20:49 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:00.172 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:00.172 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:00.172 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:00.172 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:00.172 13:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.172 13:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.172 malloc2 00:11:00.172 13:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.172 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:00.172 13:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.172 13:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.172 [2024-11-17 13:20:49.354036] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:00.172 [2024-11-17 13:20:49.354095] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:00.172 [2024-11-17 13:20:49.354134] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:00.172 [2024-11-17 13:20:49.354145] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:00.172 [2024-11-17 13:20:49.356363] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:00.172 [2024-11-17 13:20:49.356398] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:00.172 
pt2 00:11:00.172 13:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.172 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:00.172 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:00.172 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:00.172 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:00.172 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:00.172 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:00.172 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:00.172 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:00.172 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:00.172 13:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.172 13:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.431 malloc3 00:11:00.431 13:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.431 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:00.431 13:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.431 13:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.431 [2024-11-17 13:20:49.419169] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:00.431 [2024-11-17 13:20:49.419232] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:00.431 [2024-11-17 13:20:49.419254] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:00.431 [2024-11-17 13:20:49.419264] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:00.431 [2024-11-17 13:20:49.421469] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:00.431 [2024-11-17 13:20:49.421504] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:00.431 pt3 00:11:00.431 13:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.431 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:00.431 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:00.431 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:00.431 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:00.431 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:00.431 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:00.431 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:00.431 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:00.431 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:00.431 13:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.431 13:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.431 malloc4 00:11:00.431 13:20:49 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.431 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:00.431 13:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.431 13:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.431 [2024-11-17 13:20:49.474975] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:00.431 [2024-11-17 13:20:49.475027] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:00.431 [2024-11-17 13:20:49.475045] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:00.431 [2024-11-17 13:20:49.475055] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:00.431 [2024-11-17 13:20:49.477315] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:00.431 [2024-11-17 13:20:49.477351] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:00.431 pt4 00:11:00.431 13:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.431 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:00.431 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:00.431 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:00.431 13:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.431 13:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.431 [2024-11-17 13:20:49.487002] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:00.431 [2024-11-17 
13:20:49.488984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:00.431 [2024-11-17 13:20:49.489050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:00.431 [2024-11-17 13:20:49.489151] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:00.431 [2024-11-17 13:20:49.489452] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:00.431 [2024-11-17 13:20:49.489474] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:00.431 [2024-11-17 13:20:49.489755] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:00.431 [2024-11-17 13:20:49.489946] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:00.431 [2024-11-17 13:20:49.489969] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:00.431 [2024-11-17 13:20:49.490133] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:00.431 13:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.431 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:00.431 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:00.431 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:00.431 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:00.431 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.431 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:00.431 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:00.431 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.431 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.431 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.431 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.431 13:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.431 13:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.431 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:00.431 13:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.431 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.431 "name": "raid_bdev1", 00:11:00.431 "uuid": "e1c6e498-9c18-4021-953a-0bb34d3761f3", 00:11:00.431 "strip_size_kb": 64, 00:11:00.431 "state": "online", 00:11:00.431 "raid_level": "concat", 00:11:00.431 "superblock": true, 00:11:00.431 "num_base_bdevs": 4, 00:11:00.431 "num_base_bdevs_discovered": 4, 00:11:00.431 "num_base_bdevs_operational": 4, 00:11:00.431 "base_bdevs_list": [ 00:11:00.431 { 00:11:00.431 "name": "pt1", 00:11:00.431 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:00.431 "is_configured": true, 00:11:00.431 "data_offset": 2048, 00:11:00.431 "data_size": 63488 00:11:00.431 }, 00:11:00.431 { 00:11:00.431 "name": "pt2", 00:11:00.431 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:00.431 "is_configured": true, 00:11:00.431 "data_offset": 2048, 00:11:00.431 "data_size": 63488 00:11:00.431 }, 00:11:00.431 { 00:11:00.431 "name": "pt3", 00:11:00.431 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:00.431 "is_configured": true, 00:11:00.431 "data_offset": 2048, 00:11:00.431 
"data_size": 63488 00:11:00.431 }, 00:11:00.431 { 00:11:00.431 "name": "pt4", 00:11:00.431 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:00.431 "is_configured": true, 00:11:00.431 "data_offset": 2048, 00:11:00.431 "data_size": 63488 00:11:00.431 } 00:11:00.431 ] 00:11:00.431 }' 00:11:00.431 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.431 13:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.021 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:01.021 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:01.021 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:01.021 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:01.021 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:01.021 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:01.021 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:01.021 13:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.021 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:01.021 13:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.021 [2024-11-17 13:20:49.950567] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:01.021 13:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.021 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:01.021 "name": "raid_bdev1", 00:11:01.021 "aliases": [ 00:11:01.021 "e1c6e498-9c18-4021-953a-0bb34d3761f3" 
00:11:01.021 ], 00:11:01.021 "product_name": "Raid Volume", 00:11:01.021 "block_size": 512, 00:11:01.021 "num_blocks": 253952, 00:11:01.021 "uuid": "e1c6e498-9c18-4021-953a-0bb34d3761f3", 00:11:01.021 "assigned_rate_limits": { 00:11:01.021 "rw_ios_per_sec": 0, 00:11:01.021 "rw_mbytes_per_sec": 0, 00:11:01.021 "r_mbytes_per_sec": 0, 00:11:01.021 "w_mbytes_per_sec": 0 00:11:01.021 }, 00:11:01.021 "claimed": false, 00:11:01.021 "zoned": false, 00:11:01.021 "supported_io_types": { 00:11:01.021 "read": true, 00:11:01.021 "write": true, 00:11:01.021 "unmap": true, 00:11:01.021 "flush": true, 00:11:01.021 "reset": true, 00:11:01.021 "nvme_admin": false, 00:11:01.021 "nvme_io": false, 00:11:01.021 "nvme_io_md": false, 00:11:01.021 "write_zeroes": true, 00:11:01.021 "zcopy": false, 00:11:01.021 "get_zone_info": false, 00:11:01.021 "zone_management": false, 00:11:01.021 "zone_append": false, 00:11:01.021 "compare": false, 00:11:01.021 "compare_and_write": false, 00:11:01.021 "abort": false, 00:11:01.021 "seek_hole": false, 00:11:01.021 "seek_data": false, 00:11:01.021 "copy": false, 00:11:01.021 "nvme_iov_md": false 00:11:01.021 }, 00:11:01.021 "memory_domains": [ 00:11:01.021 { 00:11:01.021 "dma_device_id": "system", 00:11:01.021 "dma_device_type": 1 00:11:01.021 }, 00:11:01.021 { 00:11:01.021 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.021 "dma_device_type": 2 00:11:01.021 }, 00:11:01.021 { 00:11:01.021 "dma_device_id": "system", 00:11:01.021 "dma_device_type": 1 00:11:01.021 }, 00:11:01.021 { 00:11:01.021 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.021 "dma_device_type": 2 00:11:01.021 }, 00:11:01.021 { 00:11:01.021 "dma_device_id": "system", 00:11:01.021 "dma_device_type": 1 00:11:01.021 }, 00:11:01.021 { 00:11:01.021 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.021 "dma_device_type": 2 00:11:01.021 }, 00:11:01.021 { 00:11:01.021 "dma_device_id": "system", 00:11:01.021 "dma_device_type": 1 00:11:01.021 }, 00:11:01.021 { 00:11:01.021 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:01.021 "dma_device_type": 2 00:11:01.021 } 00:11:01.021 ], 00:11:01.021 "driver_specific": { 00:11:01.021 "raid": { 00:11:01.021 "uuid": "e1c6e498-9c18-4021-953a-0bb34d3761f3", 00:11:01.021 "strip_size_kb": 64, 00:11:01.021 "state": "online", 00:11:01.021 "raid_level": "concat", 00:11:01.021 "superblock": true, 00:11:01.021 "num_base_bdevs": 4, 00:11:01.021 "num_base_bdevs_discovered": 4, 00:11:01.021 "num_base_bdevs_operational": 4, 00:11:01.021 "base_bdevs_list": [ 00:11:01.021 { 00:11:01.021 "name": "pt1", 00:11:01.021 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:01.021 "is_configured": true, 00:11:01.021 "data_offset": 2048, 00:11:01.021 "data_size": 63488 00:11:01.021 }, 00:11:01.021 { 00:11:01.021 "name": "pt2", 00:11:01.021 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:01.021 "is_configured": true, 00:11:01.021 "data_offset": 2048, 00:11:01.021 "data_size": 63488 00:11:01.021 }, 00:11:01.021 { 00:11:01.021 "name": "pt3", 00:11:01.021 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:01.021 "is_configured": true, 00:11:01.021 "data_offset": 2048, 00:11:01.021 "data_size": 63488 00:11:01.021 }, 00:11:01.021 { 00:11:01.021 "name": "pt4", 00:11:01.021 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:01.021 "is_configured": true, 00:11:01.021 "data_offset": 2048, 00:11:01.021 "data_size": 63488 00:11:01.021 } 00:11:01.021 ] 00:11:01.021 } 00:11:01.021 } 00:11:01.021 }' 00:11:01.021 13:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:01.021 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:01.021 pt2 00:11:01.021 pt3 00:11:01.021 pt4' 00:11:01.021 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.021 13:20:50 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:01.021 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:01.021 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:01.021 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.021 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.021 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.021 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.021 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:01.021 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:01.021 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:01.021 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.021 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:01.021 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.021 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.021 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.021 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:01.021 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:01.021 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:01.021 13:20:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:01.021 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.021 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.021 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.021 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.021 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:01.021 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:01.021 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:01.021 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.021 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:01.021 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.021 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.021 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.280 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:01.280 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:01.280 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:01.280 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.280 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:11:01.280 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:01.280 [2024-11-17 13:20:50.257971] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:01.280 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.280 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e1c6e498-9c18-4021-953a-0bb34d3761f3 00:11:01.280 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z e1c6e498-9c18-4021-953a-0bb34d3761f3 ']' 00:11:01.280 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:01.280 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.280 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.280 [2024-11-17 13:20:50.301555] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:01.280 [2024-11-17 13:20:50.301582] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:01.280 [2024-11-17 13:20:50.301654] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:01.280 [2024-11-17 13:20:50.301723] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:01.280 [2024-11-17 13:20:50.301738] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:01.280 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.280 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.280 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.280 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 
-- # set +x 00:11:01.280 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:01.280 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.280 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:01.280 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:01.280 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:01.280 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:01.280 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.280 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.280 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.280 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:01.280 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:01.280 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.280 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.280 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.280 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:01.280 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:01.280 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.280 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.280 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:11:01.280 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:01.280 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:01.280 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.280 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.280 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.280 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:01.280 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:01.280 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.280 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.280 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.281 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:01.281 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:01.281 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:01.281 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:01.281 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:01.281 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:01.281 13:20:50 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:01.281 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:01.281 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:01.281 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.281 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.281 [2024-11-17 13:20:50.441342] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:01.281 [2024-11-17 13:20:50.443184] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:01.281 [2024-11-17 13:20:50.443246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:01.281 [2024-11-17 13:20:50.443281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:01.281 [2024-11-17 13:20:50.443331] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:01.281 [2024-11-17 13:20:50.443376] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:01.281 [2024-11-17 13:20:50.443419] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:01.281 [2024-11-17 13:20:50.443440] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:01.281 [2024-11-17 13:20:50.443455] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:01.281 [2024-11-17 13:20:50.443508] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:11:01.281 request: 00:11:01.281 { 00:11:01.281 "name": "raid_bdev1", 00:11:01.281 "raid_level": "concat", 00:11:01.281 "base_bdevs": [ 00:11:01.281 "malloc1", 00:11:01.281 "malloc2", 00:11:01.281 "malloc3", 00:11:01.281 "malloc4" 00:11:01.281 ], 00:11:01.281 "strip_size_kb": 64, 00:11:01.281 "superblock": false, 00:11:01.281 "method": "bdev_raid_create", 00:11:01.281 "req_id": 1 00:11:01.281 } 00:11:01.281 Got JSON-RPC error response 00:11:01.281 response: 00:11:01.281 { 00:11:01.281 "code": -17, 00:11:01.281 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:01.281 } 00:11:01.281 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:01.281 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:01.281 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:01.281 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:01.281 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:01.281 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.281 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.281 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:01.281 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.281 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.281 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:01.281 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:01.281 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:11:01.281 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.281 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.281 [2024-11-17 13:20:50.501281] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:01.281 [2024-11-17 13:20:50.501333] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:01.281 [2024-11-17 13:20:50.501350] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:01.281 [2024-11-17 13:20:50.501362] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:01.539 [2024-11-17 13:20:50.503619] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:01.539 [2024-11-17 13:20:50.503678] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:01.539 [2024-11-17 13:20:50.503751] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:01.539 [2024-11-17 13:20:50.503827] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:01.539 pt1 00:11:01.539 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.540 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:01.540 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:01.540 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:01.540 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:01.540 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:01.540 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:11:01.540 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.540 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.540 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.540 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.540 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.540 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.540 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.540 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:01.540 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.540 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.540 "name": "raid_bdev1", 00:11:01.540 "uuid": "e1c6e498-9c18-4021-953a-0bb34d3761f3", 00:11:01.540 "strip_size_kb": 64, 00:11:01.540 "state": "configuring", 00:11:01.540 "raid_level": "concat", 00:11:01.540 "superblock": true, 00:11:01.540 "num_base_bdevs": 4, 00:11:01.540 "num_base_bdevs_discovered": 1, 00:11:01.540 "num_base_bdevs_operational": 4, 00:11:01.540 "base_bdevs_list": [ 00:11:01.540 { 00:11:01.540 "name": "pt1", 00:11:01.540 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:01.540 "is_configured": true, 00:11:01.540 "data_offset": 2048, 00:11:01.540 "data_size": 63488 00:11:01.540 }, 00:11:01.540 { 00:11:01.540 "name": null, 00:11:01.540 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:01.540 "is_configured": false, 00:11:01.540 "data_offset": 2048, 00:11:01.540 "data_size": 63488 00:11:01.540 }, 00:11:01.540 { 00:11:01.540 "name": null, 00:11:01.540 
"uuid": "00000000-0000-0000-0000-000000000003", 00:11:01.540 "is_configured": false, 00:11:01.540 "data_offset": 2048, 00:11:01.540 "data_size": 63488 00:11:01.540 }, 00:11:01.540 { 00:11:01.540 "name": null, 00:11:01.540 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:01.540 "is_configured": false, 00:11:01.540 "data_offset": 2048, 00:11:01.540 "data_size": 63488 00:11:01.540 } 00:11:01.540 ] 00:11:01.540 }' 00:11:01.540 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.540 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.798 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:01.798 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:01.798 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.798 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.798 [2024-11-17 13:20:50.928607] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:01.798 [2024-11-17 13:20:50.928682] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:01.798 [2024-11-17 13:20:50.928704] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:01.798 [2024-11-17 13:20:50.928718] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:01.798 [2024-11-17 13:20:50.929196] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:01.798 [2024-11-17 13:20:50.929239] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:01.798 [2024-11-17 13:20:50.929327] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:01.798 [2024-11-17 13:20:50.929374] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:01.798 pt2 00:11:01.798 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.798 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:01.798 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.798 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.798 [2024-11-17 13:20:50.936598] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:01.798 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.798 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:01.798 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:01.798 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:01.798 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:01.798 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:01.798 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:01.798 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.798 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.798 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.798 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.798 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.798 13:20:50 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.798 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.798 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:01.798 13:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.798 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.798 "name": "raid_bdev1", 00:11:01.798 "uuid": "e1c6e498-9c18-4021-953a-0bb34d3761f3", 00:11:01.798 "strip_size_kb": 64, 00:11:01.798 "state": "configuring", 00:11:01.798 "raid_level": "concat", 00:11:01.798 "superblock": true, 00:11:01.798 "num_base_bdevs": 4, 00:11:01.798 "num_base_bdevs_discovered": 1, 00:11:01.798 "num_base_bdevs_operational": 4, 00:11:01.798 "base_bdevs_list": [ 00:11:01.798 { 00:11:01.798 "name": "pt1", 00:11:01.798 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:01.798 "is_configured": true, 00:11:01.798 "data_offset": 2048, 00:11:01.798 "data_size": 63488 00:11:01.798 }, 00:11:01.798 { 00:11:01.798 "name": null, 00:11:01.798 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:01.798 "is_configured": false, 00:11:01.798 "data_offset": 0, 00:11:01.798 "data_size": 63488 00:11:01.798 }, 00:11:01.798 { 00:11:01.798 "name": null, 00:11:01.798 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:01.798 "is_configured": false, 00:11:01.798 "data_offset": 2048, 00:11:01.798 "data_size": 63488 00:11:01.798 }, 00:11:01.798 { 00:11:01.798 "name": null, 00:11:01.798 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:01.798 "is_configured": false, 00:11:01.798 "data_offset": 2048, 00:11:01.798 "data_size": 63488 00:11:01.798 } 00:11:01.798 ] 00:11:01.798 }' 00:11:01.798 13:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.798 13:20:50 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:02.366 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:02.366 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:02.366 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:02.366 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.366 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.366 [2024-11-17 13:20:51.367888] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:02.366 [2024-11-17 13:20:51.367951] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:02.366 [2024-11-17 13:20:51.367989] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:02.366 [2024-11-17 13:20:51.368000] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:02.366 [2024-11-17 13:20:51.368497] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:02.366 [2024-11-17 13:20:51.368526] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:02.366 [2024-11-17 13:20:51.368617] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:02.366 [2024-11-17 13:20:51.368661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:02.366 pt2 00:11:02.366 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.366 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:02.366 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:02.366 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:02.366 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.366 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.366 [2024-11-17 13:20:51.375862] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:02.366 [2024-11-17 13:20:51.375914] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:02.366 [2024-11-17 13:20:51.375939] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:02.366 [2024-11-17 13:20:51.375952] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:02.366 [2024-11-17 13:20:51.376366] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:02.366 [2024-11-17 13:20:51.376393] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:02.366 [2024-11-17 13:20:51.376458] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:02.366 [2024-11-17 13:20:51.376476] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:02.366 pt3 00:11:02.366 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.366 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:02.366 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:02.366 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:02.366 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.366 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.366 [2024-11-17 13:20:51.383817] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:02.366 [2024-11-17 13:20:51.383866] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:02.366 [2024-11-17 13:20:51.383884] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:02.366 [2024-11-17 13:20:51.383892] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:02.366 [2024-11-17 13:20:51.384311] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:02.366 [2024-11-17 13:20:51.384336] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:02.366 [2024-11-17 13:20:51.384399] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:02.366 [2024-11-17 13:20:51.384418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:02.366 [2024-11-17 13:20:51.384582] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:02.366 [2024-11-17 13:20:51.384601] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:02.366 [2024-11-17 13:20:51.384866] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:02.366 [2024-11-17 13:20:51.385035] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:02.366 [2024-11-17 13:20:51.385058] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:02.366 [2024-11-17 13:20:51.385204] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:02.366 pt4 00:11:02.366 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.366 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:02.366 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:11:02.366 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:02.366 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:02.366 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:02.366 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:02.366 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:02.366 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:02.366 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.366 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.366 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.366 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.366 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:02.366 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.366 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.366 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.366 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.366 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.366 "name": "raid_bdev1", 00:11:02.366 "uuid": "e1c6e498-9c18-4021-953a-0bb34d3761f3", 00:11:02.366 "strip_size_kb": 64, 00:11:02.366 "state": "online", 00:11:02.366 "raid_level": "concat", 00:11:02.366 
"superblock": true, 00:11:02.366 "num_base_bdevs": 4, 00:11:02.366 "num_base_bdevs_discovered": 4, 00:11:02.366 "num_base_bdevs_operational": 4, 00:11:02.366 "base_bdevs_list": [ 00:11:02.366 { 00:11:02.366 "name": "pt1", 00:11:02.366 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:02.366 "is_configured": true, 00:11:02.366 "data_offset": 2048, 00:11:02.367 "data_size": 63488 00:11:02.367 }, 00:11:02.367 { 00:11:02.367 "name": "pt2", 00:11:02.367 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:02.367 "is_configured": true, 00:11:02.367 "data_offset": 2048, 00:11:02.367 "data_size": 63488 00:11:02.367 }, 00:11:02.367 { 00:11:02.367 "name": "pt3", 00:11:02.367 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:02.367 "is_configured": true, 00:11:02.367 "data_offset": 2048, 00:11:02.367 "data_size": 63488 00:11:02.367 }, 00:11:02.367 { 00:11:02.367 "name": "pt4", 00:11:02.367 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:02.367 "is_configured": true, 00:11:02.367 "data_offset": 2048, 00:11:02.367 "data_size": 63488 00:11:02.367 } 00:11:02.367 ] 00:11:02.367 }' 00:11:02.367 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.367 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.625 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:02.625 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:02.625 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:02.625 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:02.625 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:02.625 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:02.625 13:20:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:02.625 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:02.625 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.625 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.625 [2024-11-17 13:20:51.803656] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:02.625 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.625 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:02.625 "name": "raid_bdev1", 00:11:02.625 "aliases": [ 00:11:02.625 "e1c6e498-9c18-4021-953a-0bb34d3761f3" 00:11:02.625 ], 00:11:02.625 "product_name": "Raid Volume", 00:11:02.625 "block_size": 512, 00:11:02.625 "num_blocks": 253952, 00:11:02.625 "uuid": "e1c6e498-9c18-4021-953a-0bb34d3761f3", 00:11:02.625 "assigned_rate_limits": { 00:11:02.625 "rw_ios_per_sec": 0, 00:11:02.625 "rw_mbytes_per_sec": 0, 00:11:02.625 "r_mbytes_per_sec": 0, 00:11:02.625 "w_mbytes_per_sec": 0 00:11:02.625 }, 00:11:02.625 "claimed": false, 00:11:02.625 "zoned": false, 00:11:02.625 "supported_io_types": { 00:11:02.625 "read": true, 00:11:02.625 "write": true, 00:11:02.625 "unmap": true, 00:11:02.625 "flush": true, 00:11:02.625 "reset": true, 00:11:02.625 "nvme_admin": false, 00:11:02.625 "nvme_io": false, 00:11:02.625 "nvme_io_md": false, 00:11:02.625 "write_zeroes": true, 00:11:02.625 "zcopy": false, 00:11:02.625 "get_zone_info": false, 00:11:02.625 "zone_management": false, 00:11:02.625 "zone_append": false, 00:11:02.625 "compare": false, 00:11:02.625 "compare_and_write": false, 00:11:02.625 "abort": false, 00:11:02.625 "seek_hole": false, 00:11:02.625 "seek_data": false, 00:11:02.625 "copy": false, 00:11:02.625 "nvme_iov_md": false 00:11:02.625 }, 00:11:02.625 
"memory_domains": [ 00:11:02.625 { 00:11:02.625 "dma_device_id": "system", 00:11:02.625 "dma_device_type": 1 00:11:02.625 }, 00:11:02.625 { 00:11:02.625 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.625 "dma_device_type": 2 00:11:02.625 }, 00:11:02.625 { 00:11:02.625 "dma_device_id": "system", 00:11:02.625 "dma_device_type": 1 00:11:02.625 }, 00:11:02.625 { 00:11:02.625 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.625 "dma_device_type": 2 00:11:02.625 }, 00:11:02.625 { 00:11:02.625 "dma_device_id": "system", 00:11:02.625 "dma_device_type": 1 00:11:02.625 }, 00:11:02.625 { 00:11:02.625 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.625 "dma_device_type": 2 00:11:02.625 }, 00:11:02.625 { 00:11:02.625 "dma_device_id": "system", 00:11:02.625 "dma_device_type": 1 00:11:02.625 }, 00:11:02.625 { 00:11:02.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.626 "dma_device_type": 2 00:11:02.626 } 00:11:02.626 ], 00:11:02.626 "driver_specific": { 00:11:02.626 "raid": { 00:11:02.626 "uuid": "e1c6e498-9c18-4021-953a-0bb34d3761f3", 00:11:02.626 "strip_size_kb": 64, 00:11:02.626 "state": "online", 00:11:02.626 "raid_level": "concat", 00:11:02.626 "superblock": true, 00:11:02.626 "num_base_bdevs": 4, 00:11:02.626 "num_base_bdevs_discovered": 4, 00:11:02.626 "num_base_bdevs_operational": 4, 00:11:02.626 "base_bdevs_list": [ 00:11:02.626 { 00:11:02.626 "name": "pt1", 00:11:02.626 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:02.626 "is_configured": true, 00:11:02.626 "data_offset": 2048, 00:11:02.626 "data_size": 63488 00:11:02.626 }, 00:11:02.626 { 00:11:02.626 "name": "pt2", 00:11:02.626 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:02.626 "is_configured": true, 00:11:02.626 "data_offset": 2048, 00:11:02.626 "data_size": 63488 00:11:02.626 }, 00:11:02.626 { 00:11:02.626 "name": "pt3", 00:11:02.626 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:02.626 "is_configured": true, 00:11:02.626 "data_offset": 2048, 00:11:02.626 "data_size": 63488 
00:11:02.626 }, 00:11:02.626 { 00:11:02.626 "name": "pt4", 00:11:02.626 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:02.626 "is_configured": true, 00:11:02.626 "data_offset": 2048, 00:11:02.626 "data_size": 63488 00:11:02.626 } 00:11:02.626 ] 00:11:02.626 } 00:11:02.626 } 00:11:02.626 }' 00:11:02.626 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:02.884 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:02.884 pt2 00:11:02.884 pt3 00:11:02.884 pt4' 00:11:02.884 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.884 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:02.884 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:02.884 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:02.884 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.884 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.884 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.884 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.884 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:02.884 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:02.884 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:02.884 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:11:02.884 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.884 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.884 13:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.884 13:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.884 13:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:02.884 13:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:02.884 13:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:02.884 13:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:02.884 13:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.884 13:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.884 13:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.884 13:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.884 13:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:02.884 13:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:02.884 13:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:02.884 13:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:02.884 13:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:11:02.884 13:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.884 13:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.884 13:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.143 13:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:03.143 13:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:03.143 13:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:03.143 13:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:03.143 13:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.143 13:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.143 [2024-11-17 13:20:52.131046] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:03.143 13:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.143 13:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e1c6e498-9c18-4021-953a-0bb34d3761f3 '!=' e1c6e498-9c18-4021-953a-0bb34d3761f3 ']' 00:11:03.143 13:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:11:03.143 13:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:03.143 13:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:03.143 13:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72534 00:11:03.143 13:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72534 ']' 00:11:03.143 13:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72534 00:11:03.143 13:20:52 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@959 -- # uname 00:11:03.143 13:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:03.143 13:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72534 00:11:03.143 killing process with pid 72534 00:11:03.143 13:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:03.143 13:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:03.143 13:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72534' 00:11:03.143 13:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72534 00:11:03.143 13:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72534 00:11:03.143 [2024-11-17 13:20:52.196277] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:03.143 [2024-11-17 13:20:52.196372] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:03.143 [2024-11-17 13:20:52.196454] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:03.143 [2024-11-17 13:20:52.196481] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:03.402 [2024-11-17 13:20:52.614140] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:04.778 13:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:04.778 00:11:04.778 real 0m5.426s 00:11:04.778 user 0m7.641s 00:11:04.778 sys 0m0.969s 00:11:04.778 13:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:04.778 ************************************ 00:11:04.778 END TEST raid_superblock_test 00:11:04.778 ************************************ 00:11:04.778 13:20:53 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.778 13:20:53 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:11:04.778 13:20:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:04.778 13:20:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:04.778 13:20:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:04.778 ************************************ 00:11:04.778 START TEST raid_read_error_test 00:11:04.778 ************************************ 00:11:04.778 13:20:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:11:04.778 13:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:04.778 13:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:04.778 13:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:04.778 13:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:04.778 13:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:04.778 13:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:04.778 13:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:04.778 13:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:04.778 13:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:04.778 13:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:04.778 13:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:04.778 13:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:04.778 13:20:53 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:04.778 13:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:04.778 13:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:04.778 13:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:04.778 13:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:04.778 13:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:04.778 13:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:04.778 13:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:04.778 13:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:04.778 13:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:04.778 13:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:04.778 13:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:04.778 13:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:04.778 13:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:04.778 13:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:04.779 13:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:04.779 13:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.IqxByiBlFU 00:11:04.779 13:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72800 00:11:04.779 13:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72800 00:11:04.779 13:20:53 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@835 -- # '[' -z 72800 ']' 00:11:04.779 13:20:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:04.779 13:20:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:04.779 13:20:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:04.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:04.779 13:20:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:04.779 13:20:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.779 13:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:04.779 [2024-11-17 13:20:53.918772] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:11:04.779 [2024-11-17 13:20:53.918884] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72800 ] 00:11:05.037 [2024-11-17 13:20:54.094379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:05.037 [2024-11-17 13:20:54.216216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.296 [2024-11-17 13:20:54.429321] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:05.296 [2024-11-17 13:20:54.429382] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:05.554 13:20:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:05.554 13:20:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:05.554 13:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:05.555 13:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:05.555 13:20:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.555 13:20:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.814 BaseBdev1_malloc 00:11:05.814 13:20:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.814 13:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:05.814 13:20:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.814 13:20:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.814 true 00:11:05.814 13:20:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:05.814 13:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:05.814 13:20:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.814 13:20:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.814 [2024-11-17 13:20:54.805799] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:05.814 [2024-11-17 13:20:54.805863] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:05.814 [2024-11-17 13:20:54.805883] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:05.814 [2024-11-17 13:20:54.805897] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:05.814 [2024-11-17 13:20:54.808178] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:05.814 [2024-11-17 13:20:54.808231] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:05.814 BaseBdev1 00:11:05.814 13:20:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.814 13:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:05.814 13:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:05.814 13:20:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.814 13:20:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.815 BaseBdev2_malloc 00:11:05.815 13:20:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.815 13:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:05.815 13:20:54 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.815 13:20:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.815 true 00:11:05.815 13:20:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.815 13:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:05.815 13:20:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.815 13:20:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.815 [2024-11-17 13:20:54.862284] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:05.815 [2024-11-17 13:20:54.862341] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:05.815 [2024-11-17 13:20:54.862357] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:05.815 [2024-11-17 13:20:54.862368] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:05.815 [2024-11-17 13:20:54.864565] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:05.815 [2024-11-17 13:20:54.864605] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:05.815 BaseBdev2 00:11:05.815 13:20:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.815 13:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:05.815 13:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:05.815 13:20:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.815 13:20:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.815 BaseBdev3_malloc 00:11:05.815 13:20:54 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.815 13:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:05.815 13:20:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.815 13:20:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.815 true 00:11:05.815 13:20:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.815 13:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:05.815 13:20:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.815 13:20:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.815 [2024-11-17 13:20:54.930637] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:05.815 [2024-11-17 13:20:54.930689] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:05.815 [2024-11-17 13:20:54.930707] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:05.815 [2024-11-17 13:20:54.930718] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:05.815 [2024-11-17 13:20:54.932929] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:05.815 [2024-11-17 13:20:54.932969] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:05.815 BaseBdev3 00:11:05.815 13:20:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.815 13:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:05.815 13:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:05.815 13:20:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.815 13:20:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.815 BaseBdev4_malloc 00:11:05.815 13:20:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.815 13:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:05.815 13:20:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.815 13:20:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.815 true 00:11:05.815 13:20:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.815 13:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:05.815 13:20:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.815 13:20:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.815 [2024-11-17 13:20:54.990265] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:05.815 [2024-11-17 13:20:54.990333] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:05.815 [2024-11-17 13:20:54.990352] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:05.815 [2024-11-17 13:20:54.990364] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:05.815 [2024-11-17 13:20:54.992548] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:05.815 [2024-11-17 13:20:54.992589] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:05.815 BaseBdev4 00:11:05.815 13:20:54 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.815 13:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:05.815 13:20:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.815 13:20:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.815 [2024-11-17 13:20:54.998323] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:05.815 [2024-11-17 13:20:55.000124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:05.815 [2024-11-17 13:20:55.000201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:05.815 [2024-11-17 13:20:55.000280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:05.815 [2024-11-17 13:20:55.000520] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:05.815 [2024-11-17 13:20:55.000543] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:05.815 [2024-11-17 13:20:55.000788] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:05.815 [2024-11-17 13:20:55.000955] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:05.815 [2024-11-17 13:20:55.000974] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:05.815 [2024-11-17 13:20:55.001172] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:05.815 13:20:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.815 13:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:05.815 13:20:55 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:05.815 13:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:05.815 13:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:05.815 13:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:05.815 13:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:05.815 13:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.815 13:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.815 13:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.815 13:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.815 13:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.815 13:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:05.815 13:20:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.815 13:20:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.815 13:20:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.074 13:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.074 "name": "raid_bdev1", 00:11:06.074 "uuid": "4635db7c-8823-45d3-ba0b-e38c2c211fec", 00:11:06.074 "strip_size_kb": 64, 00:11:06.074 "state": "online", 00:11:06.074 "raid_level": "concat", 00:11:06.074 "superblock": true, 00:11:06.074 "num_base_bdevs": 4, 00:11:06.074 "num_base_bdevs_discovered": 4, 00:11:06.074 "num_base_bdevs_operational": 4, 00:11:06.074 "base_bdevs_list": [ 
00:11:06.074 { 00:11:06.074 "name": "BaseBdev1", 00:11:06.074 "uuid": "c9394e85-2bc7-5646-b70b-22179787f5ba", 00:11:06.074 "is_configured": true, 00:11:06.074 "data_offset": 2048, 00:11:06.074 "data_size": 63488 00:11:06.074 }, 00:11:06.074 { 00:11:06.074 "name": "BaseBdev2", 00:11:06.074 "uuid": "c8a3cfe0-d430-56b9-897e-224f8f751157", 00:11:06.074 "is_configured": true, 00:11:06.074 "data_offset": 2048, 00:11:06.074 "data_size": 63488 00:11:06.074 }, 00:11:06.074 { 00:11:06.074 "name": "BaseBdev3", 00:11:06.074 "uuid": "d4ddaad3-2f44-560d-9fc4-b33db364af1f", 00:11:06.074 "is_configured": true, 00:11:06.074 "data_offset": 2048, 00:11:06.074 "data_size": 63488 00:11:06.074 }, 00:11:06.074 { 00:11:06.074 "name": "BaseBdev4", 00:11:06.074 "uuid": "c062d55f-e752-5389-a355-620903498d9f", 00:11:06.074 "is_configured": true, 00:11:06.074 "data_offset": 2048, 00:11:06.074 "data_size": 63488 00:11:06.074 } 00:11:06.074 ] 00:11:06.074 }' 00:11:06.074 13:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.074 13:20:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.332 13:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:06.332 13:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:06.332 [2024-11-17 13:20:55.534675] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:07.268 13:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:07.268 13:20:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.268 13:20:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.268 13:20:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.268 13:20:56 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:07.268 13:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:07.268 13:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:07.268 13:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:07.268 13:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:07.268 13:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:07.268 13:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:07.268 13:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:07.268 13:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:07.268 13:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.268 13:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.268 13:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.268 13:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.268 13:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.268 13:20:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.268 13:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:07.268 13:20:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.268 13:20:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.527 13:20:56 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.527 "name": "raid_bdev1", 00:11:07.527 "uuid": "4635db7c-8823-45d3-ba0b-e38c2c211fec", 00:11:07.527 "strip_size_kb": 64, 00:11:07.527 "state": "online", 00:11:07.527 "raid_level": "concat", 00:11:07.527 "superblock": true, 00:11:07.527 "num_base_bdevs": 4, 00:11:07.527 "num_base_bdevs_discovered": 4, 00:11:07.527 "num_base_bdevs_operational": 4, 00:11:07.527 "base_bdevs_list": [ 00:11:07.527 { 00:11:07.527 "name": "BaseBdev1", 00:11:07.527 "uuid": "c9394e85-2bc7-5646-b70b-22179787f5ba", 00:11:07.527 "is_configured": true, 00:11:07.527 "data_offset": 2048, 00:11:07.527 "data_size": 63488 00:11:07.527 }, 00:11:07.527 { 00:11:07.527 "name": "BaseBdev2", 00:11:07.527 "uuid": "c8a3cfe0-d430-56b9-897e-224f8f751157", 00:11:07.527 "is_configured": true, 00:11:07.527 "data_offset": 2048, 00:11:07.527 "data_size": 63488 00:11:07.527 }, 00:11:07.527 { 00:11:07.527 "name": "BaseBdev3", 00:11:07.527 "uuid": "d4ddaad3-2f44-560d-9fc4-b33db364af1f", 00:11:07.527 "is_configured": true, 00:11:07.527 "data_offset": 2048, 00:11:07.527 "data_size": 63488 00:11:07.527 }, 00:11:07.527 { 00:11:07.527 "name": "BaseBdev4", 00:11:07.527 "uuid": "c062d55f-e752-5389-a355-620903498d9f", 00:11:07.527 "is_configured": true, 00:11:07.527 "data_offset": 2048, 00:11:07.527 "data_size": 63488 00:11:07.527 } 00:11:07.527 ] 00:11:07.527 }' 00:11:07.527 13:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.527 13:20:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.786 13:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:07.786 13:20:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.786 13:20:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.786 [2024-11-17 13:20:56.898839] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:07.786 [2024-11-17 13:20:56.898878] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:07.786 [2024-11-17 13:20:56.901708] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:07.786 [2024-11-17 13:20:56.901774] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:07.786 [2024-11-17 13:20:56.901818] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:07.786 [2024-11-17 13:20:56.901833] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:07.786 13:20:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.786 13:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72800 00:11:07.786 13:20:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 72800 ']' 00:11:07.786 { 00:11:07.786 "results": [ 00:11:07.786 { 00:11:07.786 "job": "raid_bdev1", 00:11:07.786 "core_mask": "0x1", 00:11:07.786 "workload": "randrw", 00:11:07.786 "percentage": 50, 00:11:07.786 "status": "finished", 00:11:07.786 "queue_depth": 1, 00:11:07.786 "io_size": 131072, 00:11:07.786 "runtime": 1.364796, 00:11:07.786 "iops": 15318.772915512647, 00:11:07.786 "mibps": 1914.846614439081, 00:11:07.786 "io_failed": 1, 00:11:07.786 "io_timeout": 0, 00:11:07.786 "avg_latency_us": 90.55494489061248, 00:11:07.786 "min_latency_us": 26.941484716157206, 00:11:07.786 "max_latency_us": 1502.46288209607 00:11:07.786 } 00:11:07.786 ], 00:11:07.786 "core_count": 1 00:11:07.786 } 00:11:07.786 13:20:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 72800 00:11:07.786 13:20:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:07.786 13:20:56 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:07.786 13:20:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72800 00:11:07.786 killing process with pid 72800 00:11:07.786 13:20:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:07.786 13:20:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:07.786 13:20:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72800' 00:11:07.786 13:20:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 72800 00:11:07.786 13:20:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 72800 00:11:07.786 [2024-11-17 13:20:56.947827] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:08.362 [2024-11-17 13:20:57.282981] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:09.307 13:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:09.307 13:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.IqxByiBlFU 00:11:09.307 13:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:09.307 13:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:11:09.307 13:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:09.307 13:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:09.307 13:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:09.307 13:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:11:09.307 00:11:09.307 real 0m4.690s 00:11:09.307 user 0m5.488s 00:11:09.307 sys 0m0.594s 00:11:09.307 13:20:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:11:09.307 13:20:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.307 ************************************ 00:11:09.307 END TEST raid_read_error_test 00:11:09.307 ************************************ 00:11:09.565 13:20:58 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:11:09.565 13:20:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:09.565 13:20:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:09.565 13:20:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:09.565 ************************************ 00:11:09.565 START TEST raid_write_error_test 00:11:09.565 ************************************ 00:11:09.565 13:20:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:11:09.565 13:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:09.565 13:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:09.565 13:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:09.565 13:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:09.565 13:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:09.565 13:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:09.565 13:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:09.565 13:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:09.565 13:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:09.565 13:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:09.565 13:20:58 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:09.565 13:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:09.565 13:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:09.566 13:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:09.566 13:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:09.566 13:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:09.566 13:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:09.566 13:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:09.566 13:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:09.566 13:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:09.566 13:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:09.566 13:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:09.566 13:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:09.566 13:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:09.566 13:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:09.566 13:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:09.566 13:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:09.566 13:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:09.566 13:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.80B0L4Rx72 00:11:09.566 13:20:58 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72945 00:11:09.566 13:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72945 00:11:09.566 13:20:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:09.566 13:20:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 72945 ']' 00:11:09.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:09.566 13:20:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:09.566 13:20:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:09.566 13:20:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:09.566 13:20:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:09.566 13:20:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.566 [2024-11-17 13:20:58.681901] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:11:09.566 [2024-11-17 13:20:58.682023] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72945 ] 00:11:09.824 [2024-11-17 13:20:58.857310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.824 [2024-11-17 13:20:58.971196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.081 [2024-11-17 13:20:59.161688] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:10.081 [2024-11-17 13:20:59.161749] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:10.339 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:10.339 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:10.339 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:10.339 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:10.339 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.339 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.598 BaseBdev1_malloc 00:11:10.598 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.598 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:10.598 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.598 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.598 true 00:11:10.598 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:10.598 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:10.598 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.598 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.598 [2024-11-17 13:20:59.582239] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:10.598 [2024-11-17 13:20:59.582291] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:10.598 [2024-11-17 13:20:59.582311] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:10.598 [2024-11-17 13:20:59.582322] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:10.598 [2024-11-17 13:20:59.584527] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:10.598 [2024-11-17 13:20:59.584571] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:10.598 BaseBdev1 00:11:10.598 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.598 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:10.598 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:10.598 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.598 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.598 BaseBdev2_malloc 00:11:10.598 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.598 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:10.598 13:20:59 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.598 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.598 true 00:11:10.598 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.598 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:10.598 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.598 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.598 [2024-11-17 13:20:59.649528] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:10.598 [2024-11-17 13:20:59.649621] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:10.598 [2024-11-17 13:20:59.649641] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:10.598 [2024-11-17 13:20:59.649652] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:10.598 [2024-11-17 13:20:59.651961] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:10.598 [2024-11-17 13:20:59.652001] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:10.598 BaseBdev2 00:11:10.598 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.598 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:10.598 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:10.598 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.598 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:10.598 BaseBdev3_malloc 00:11:10.598 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.598 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:10.598 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.598 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.598 true 00:11:10.598 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.598 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:10.598 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.598 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.598 [2024-11-17 13:20:59.730614] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:10.598 [2024-11-17 13:20:59.730668] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:10.598 [2024-11-17 13:20:59.730686] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:10.598 [2024-11-17 13:20:59.730696] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:10.598 [2024-11-17 13:20:59.732926] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:10.598 [2024-11-17 13:20:59.733025] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:10.598 BaseBdev3 00:11:10.598 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.598 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:10.598 13:20:59 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:10.598 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.598 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.598 BaseBdev4_malloc 00:11:10.598 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.598 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:10.598 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.598 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.598 true 00:11:10.598 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.598 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:10.598 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.598 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.598 [2024-11-17 13:20:59.799656] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:10.598 [2024-11-17 13:20:59.799747] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:10.599 [2024-11-17 13:20:59.799769] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:10.599 [2024-11-17 13:20:59.799780] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:10.599 [2024-11-17 13:20:59.802072] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:10.599 [2024-11-17 13:20:59.802114] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:10.599 BaseBdev4 
00:11:10.599 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.599 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:10.599 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.599 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.599 [2024-11-17 13:20:59.811699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:10.599 [2024-11-17 13:20:59.813571] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:10.599 [2024-11-17 13:20:59.813651] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:10.599 [2024-11-17 13:20:59.813723] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:10.599 [2024-11-17 13:20:59.813980] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:10.599 [2024-11-17 13:20:59.813993] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:10.599 [2024-11-17 13:20:59.814245] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:10.599 [2024-11-17 13:20:59.814406] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:10.599 [2024-11-17 13:20:59.814417] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:10.599 [2024-11-17 13:20:59.814566] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:10.599 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.599 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:11:10.599 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:10.599 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:10.599 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:10.599 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:10.599 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:10.599 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.599 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.599 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.599 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.857 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.857 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:10.857 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.857 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.857 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.857 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.857 "name": "raid_bdev1", 00:11:10.857 "uuid": "1854f88a-0eea-43ba-a6c7-09ef81e04145", 00:11:10.857 "strip_size_kb": 64, 00:11:10.857 "state": "online", 00:11:10.857 "raid_level": "concat", 00:11:10.857 "superblock": true, 00:11:10.857 "num_base_bdevs": 4, 00:11:10.857 "num_base_bdevs_discovered": 4, 00:11:10.857 
"num_base_bdevs_operational": 4, 00:11:10.857 "base_bdevs_list": [ 00:11:10.857 { 00:11:10.857 "name": "BaseBdev1", 00:11:10.857 "uuid": "23ed4664-ad02-575b-9141-e9c1a500e0fe", 00:11:10.857 "is_configured": true, 00:11:10.857 "data_offset": 2048, 00:11:10.857 "data_size": 63488 00:11:10.857 }, 00:11:10.857 { 00:11:10.857 "name": "BaseBdev2", 00:11:10.857 "uuid": "33b201d1-b16d-5035-9ed6-0f0431421463", 00:11:10.857 "is_configured": true, 00:11:10.857 "data_offset": 2048, 00:11:10.857 "data_size": 63488 00:11:10.857 }, 00:11:10.857 { 00:11:10.857 "name": "BaseBdev3", 00:11:10.857 "uuid": "ddba3984-013c-549b-93ad-0539e64061e4", 00:11:10.857 "is_configured": true, 00:11:10.857 "data_offset": 2048, 00:11:10.857 "data_size": 63488 00:11:10.857 }, 00:11:10.857 { 00:11:10.857 "name": "BaseBdev4", 00:11:10.857 "uuid": "24e98ed2-46bc-5b07-bf49-dc4091e6482e", 00:11:10.857 "is_configured": true, 00:11:10.857 "data_offset": 2048, 00:11:10.857 "data_size": 63488 00:11:10.857 } 00:11:10.857 ] 00:11:10.857 }' 00:11:10.858 13:20:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.858 13:20:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.116 13:21:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:11.116 13:21:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:11.116 [2024-11-17 13:21:00.332177] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:12.051 13:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:12.051 13:21:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.051 13:21:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.051 13:21:01 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.051 13:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:12.051 13:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:12.051 13:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:12.051 13:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:12.051 13:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:12.051 13:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:12.051 13:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:12.051 13:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:12.051 13:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:12.051 13:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.051 13:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.051 13:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.051 13:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.051 13:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.051 13:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:12.051 13:21:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.051 13:21:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.309 13:21:01 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.309 13:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.309 "name": "raid_bdev1", 00:11:12.309 "uuid": "1854f88a-0eea-43ba-a6c7-09ef81e04145", 00:11:12.309 "strip_size_kb": 64, 00:11:12.309 "state": "online", 00:11:12.309 "raid_level": "concat", 00:11:12.309 "superblock": true, 00:11:12.309 "num_base_bdevs": 4, 00:11:12.309 "num_base_bdevs_discovered": 4, 00:11:12.309 "num_base_bdevs_operational": 4, 00:11:12.309 "base_bdevs_list": [ 00:11:12.309 { 00:11:12.309 "name": "BaseBdev1", 00:11:12.309 "uuid": "23ed4664-ad02-575b-9141-e9c1a500e0fe", 00:11:12.309 "is_configured": true, 00:11:12.309 "data_offset": 2048, 00:11:12.309 "data_size": 63488 00:11:12.309 }, 00:11:12.309 { 00:11:12.309 "name": "BaseBdev2", 00:11:12.309 "uuid": "33b201d1-b16d-5035-9ed6-0f0431421463", 00:11:12.309 "is_configured": true, 00:11:12.309 "data_offset": 2048, 00:11:12.309 "data_size": 63488 00:11:12.309 }, 00:11:12.309 { 00:11:12.309 "name": "BaseBdev3", 00:11:12.309 "uuid": "ddba3984-013c-549b-93ad-0539e64061e4", 00:11:12.309 "is_configured": true, 00:11:12.309 "data_offset": 2048, 00:11:12.309 "data_size": 63488 00:11:12.309 }, 00:11:12.309 { 00:11:12.309 "name": "BaseBdev4", 00:11:12.309 "uuid": "24e98ed2-46bc-5b07-bf49-dc4091e6482e", 00:11:12.309 "is_configured": true, 00:11:12.309 "data_offset": 2048, 00:11:12.309 "data_size": 63488 00:11:12.309 } 00:11:12.309 ] 00:11:12.309 }' 00:11:12.309 13:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.309 13:21:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.568 13:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:12.568 13:21:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.568 13:21:01 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:12.568 [2024-11-17 13:21:01.732502] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:12.568 [2024-11-17 13:21:01.732627] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:12.568 [2024-11-17 13:21:01.735557] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:12.568 [2024-11-17 13:21:01.735689] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:12.568 [2024-11-17 13:21:01.735777] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:12.568 [2024-11-17 13:21:01.735851] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:12.568 { 00:11:12.568 "results": [ 00:11:12.568 { 00:11:12.568 "job": "raid_bdev1", 00:11:12.568 "core_mask": "0x1", 00:11:12.568 "workload": "randrw", 00:11:12.568 "percentage": 50, 00:11:12.568 "status": "finished", 00:11:12.568 "queue_depth": 1, 00:11:12.568 "io_size": 131072, 00:11:12.568 "runtime": 1.401199, 00:11:12.568 "iops": 15308.318090435405, 00:11:12.568 "mibps": 1913.5397613044256, 00:11:12.568 "io_failed": 1, 00:11:12.568 "io_timeout": 0, 00:11:12.568 "avg_latency_us": 90.63730134220798, 00:11:12.568 "min_latency_us": 27.83580786026201, 00:11:12.568 "max_latency_us": 1616.9362445414847 00:11:12.568 } 00:11:12.568 ], 00:11:12.568 "core_count": 1 00:11:12.568 } 00:11:12.568 13:21:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.568 13:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72945 00:11:12.568 13:21:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 72945 ']' 00:11:12.568 13:21:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 72945 00:11:12.568 13:21:01 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:11:12.568 13:21:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:12.568 13:21:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72945 00:11:12.568 13:21:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:12.568 13:21:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:12.568 13:21:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72945' 00:11:12.568 killing process with pid 72945 00:11:12.568 13:21:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 72945 00:11:12.568 [2024-11-17 13:21:01.782559] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:12.568 13:21:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 72945 00:11:13.138 [2024-11-17 13:21:02.125726] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:14.513 13:21:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:14.513 13:21:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.80B0L4Rx72 00:11:14.513 13:21:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:14.513 13:21:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:11:14.513 13:21:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:14.513 13:21:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:14.513 13:21:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:14.513 13:21:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:11:14.513 00:11:14.513 real 0m4.776s 00:11:14.513 user 0m5.644s 
00:11:14.513 sys 0m0.548s 00:11:14.513 ************************************ 00:11:14.513 END TEST raid_write_error_test 00:11:14.514 ************************************ 00:11:14.514 13:21:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:14.514 13:21:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.514 13:21:03 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:14.514 13:21:03 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:11:14.514 13:21:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:14.514 13:21:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:14.514 13:21:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:14.514 ************************************ 00:11:14.514 START TEST raid_state_function_test 00:11:14.514 ************************************ 00:11:14.514 13:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:11:14.514 13:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:14.514 13:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:14.514 13:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:14.514 13:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:14.514 13:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:14.514 13:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:14.514 13:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:14.514 13:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:14.514 
13:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:14.514 13:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:14.514 13:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:14.514 13:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:14.514 13:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:14.514 13:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:14.514 13:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:14.514 13:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:14.514 13:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:14.514 13:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:14.514 13:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:14.514 13:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:14.514 13:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:14.514 13:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:14.514 13:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:14.514 13:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:14.514 13:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:14.514 13:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:14.514 13:21:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:14.514 13:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:14.514 13:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73089 00:11:14.514 13:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:14.514 13:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73089' 00:11:14.514 Process raid pid: 73089 00:11:14.514 13:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73089 00:11:14.514 13:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 73089 ']' 00:11:14.514 13:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:14.514 13:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:14.514 13:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:14.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:14.514 13:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:14.514 13:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.514 [2024-11-17 13:21:03.528433] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:11:14.514 [2024-11-17 13:21:03.528584] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:14.514 [2024-11-17 13:21:03.699286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.772 [2024-11-17 13:21:03.816002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.031 [2024-11-17 13:21:04.037326] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:15.031 [2024-11-17 13:21:04.037358] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:15.290 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:15.290 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:15.290 13:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:15.290 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.290 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.290 [2024-11-17 13:21:04.364730] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:15.290 [2024-11-17 13:21:04.364784] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:15.290 [2024-11-17 13:21:04.364795] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:15.290 [2024-11-17 13:21:04.364804] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:15.290 [2024-11-17 13:21:04.364811] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:15.290 [2024-11-17 13:21:04.364819] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:15.290 [2024-11-17 13:21:04.364825] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:15.290 [2024-11-17 13:21:04.364833] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:15.290 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.290 13:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:15.290 13:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:15.290 13:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:15.290 13:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:15.290 13:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:15.290 13:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:15.291 13:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.291 13:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.291 13:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.291 13:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.291 13:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.291 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.291 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:11:15.291 13:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.291 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.291 13:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.291 "name": "Existed_Raid", 00:11:15.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.291 "strip_size_kb": 0, 00:11:15.291 "state": "configuring", 00:11:15.291 "raid_level": "raid1", 00:11:15.291 "superblock": false, 00:11:15.291 "num_base_bdevs": 4, 00:11:15.291 "num_base_bdevs_discovered": 0, 00:11:15.291 "num_base_bdevs_operational": 4, 00:11:15.291 "base_bdevs_list": [ 00:11:15.291 { 00:11:15.291 "name": "BaseBdev1", 00:11:15.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.291 "is_configured": false, 00:11:15.291 "data_offset": 0, 00:11:15.291 "data_size": 0 00:11:15.291 }, 00:11:15.291 { 00:11:15.291 "name": "BaseBdev2", 00:11:15.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.291 "is_configured": false, 00:11:15.291 "data_offset": 0, 00:11:15.291 "data_size": 0 00:11:15.291 }, 00:11:15.291 { 00:11:15.291 "name": "BaseBdev3", 00:11:15.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.291 "is_configured": false, 00:11:15.291 "data_offset": 0, 00:11:15.291 "data_size": 0 00:11:15.291 }, 00:11:15.291 { 00:11:15.291 "name": "BaseBdev4", 00:11:15.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.291 "is_configured": false, 00:11:15.291 "data_offset": 0, 00:11:15.291 "data_size": 0 00:11:15.291 } 00:11:15.291 ] 00:11:15.291 }' 00:11:15.291 13:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.291 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.857 13:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:11:15.857 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.858 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.858 [2024-11-17 13:21:04.815903] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:15.858 [2024-11-17 13:21:04.815985] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:15.858 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.858 13:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:15.858 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.858 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.858 [2024-11-17 13:21:04.827887] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:15.858 [2024-11-17 13:21:04.827966] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:15.858 [2024-11-17 13:21:04.828012] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:15.858 [2024-11-17 13:21:04.828038] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:15.858 [2024-11-17 13:21:04.828059] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:15.858 [2024-11-17 13:21:04.828083] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:15.858 [2024-11-17 13:21:04.828103] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:15.858 [2024-11-17 13:21:04.828167] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: 
*DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:15.858 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.858 13:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:15.858 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.858 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.858 [2024-11-17 13:21:04.872955] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:15.858 BaseBdev1 00:11:15.858 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.858 13:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:15.858 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:15.858 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:15.858 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:15.858 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:15.858 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:15.858 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:15.858 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.858 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.858 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.858 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
-t 2000 00:11:15.858 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.858 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.858 [ 00:11:15.858 { 00:11:15.858 "name": "BaseBdev1", 00:11:15.858 "aliases": [ 00:11:15.858 "93179177-625a-4e90-ab61-675415864946" 00:11:15.858 ], 00:11:15.858 "product_name": "Malloc disk", 00:11:15.858 "block_size": 512, 00:11:15.858 "num_blocks": 65536, 00:11:15.858 "uuid": "93179177-625a-4e90-ab61-675415864946", 00:11:15.858 "assigned_rate_limits": { 00:11:15.858 "rw_ios_per_sec": 0, 00:11:15.858 "rw_mbytes_per_sec": 0, 00:11:15.858 "r_mbytes_per_sec": 0, 00:11:15.858 "w_mbytes_per_sec": 0 00:11:15.858 }, 00:11:15.858 "claimed": true, 00:11:15.858 "claim_type": "exclusive_write", 00:11:15.858 "zoned": false, 00:11:15.858 "supported_io_types": { 00:11:15.858 "read": true, 00:11:15.858 "write": true, 00:11:15.858 "unmap": true, 00:11:15.858 "flush": true, 00:11:15.858 "reset": true, 00:11:15.858 "nvme_admin": false, 00:11:15.858 "nvme_io": false, 00:11:15.858 "nvme_io_md": false, 00:11:15.858 "write_zeroes": true, 00:11:15.858 "zcopy": true, 00:11:15.858 "get_zone_info": false, 00:11:15.858 "zone_management": false, 00:11:15.858 "zone_append": false, 00:11:15.858 "compare": false, 00:11:15.858 "compare_and_write": false, 00:11:15.858 "abort": true, 00:11:15.858 "seek_hole": false, 00:11:15.858 "seek_data": false, 00:11:15.858 "copy": true, 00:11:15.858 "nvme_iov_md": false 00:11:15.858 }, 00:11:15.858 "memory_domains": [ 00:11:15.858 { 00:11:15.858 "dma_device_id": "system", 00:11:15.858 "dma_device_type": 1 00:11:15.858 }, 00:11:15.858 { 00:11:15.858 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.858 "dma_device_type": 2 00:11:15.858 } 00:11:15.858 ], 00:11:15.858 "driver_specific": {} 00:11:15.858 } 00:11:15.858 ] 00:11:15.858 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:15.858 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:15.858 13:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:15.858 13:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:15.858 13:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:15.858 13:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:15.858 13:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:15.858 13:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:15.858 13:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.858 13:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.858 13:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.858 13:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.858 13:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.858 13:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.858 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.858 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.858 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.858 13:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.858 "name": "Existed_Raid", 00:11:15.858 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:15.858 "strip_size_kb": 0, 00:11:15.858 "state": "configuring", 00:11:15.858 "raid_level": "raid1", 00:11:15.858 "superblock": false, 00:11:15.858 "num_base_bdevs": 4, 00:11:15.858 "num_base_bdevs_discovered": 1, 00:11:15.858 "num_base_bdevs_operational": 4, 00:11:15.858 "base_bdevs_list": [ 00:11:15.858 { 00:11:15.858 "name": "BaseBdev1", 00:11:15.858 "uuid": "93179177-625a-4e90-ab61-675415864946", 00:11:15.858 "is_configured": true, 00:11:15.858 "data_offset": 0, 00:11:15.858 "data_size": 65536 00:11:15.858 }, 00:11:15.858 { 00:11:15.858 "name": "BaseBdev2", 00:11:15.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.858 "is_configured": false, 00:11:15.858 "data_offset": 0, 00:11:15.858 "data_size": 0 00:11:15.858 }, 00:11:15.858 { 00:11:15.858 "name": "BaseBdev3", 00:11:15.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.858 "is_configured": false, 00:11:15.858 "data_offset": 0, 00:11:15.858 "data_size": 0 00:11:15.858 }, 00:11:15.858 { 00:11:15.858 "name": "BaseBdev4", 00:11:15.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.858 "is_configured": false, 00:11:15.858 "data_offset": 0, 00:11:15.858 "data_size": 0 00:11:15.858 } 00:11:15.858 ] 00:11:15.858 }' 00:11:15.858 13:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.858 13:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.426 13:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:16.426 13:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.426 13:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.426 [2024-11-17 13:21:05.368168] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:16.426 [2024-11-17 13:21:05.368241] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:16.426 13:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.426 13:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:16.426 13:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.426 13:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.426 [2024-11-17 13:21:05.380184] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:16.426 [2024-11-17 13:21:05.382173] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:16.426 [2024-11-17 13:21:05.382234] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:16.426 [2024-11-17 13:21:05.382247] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:16.426 [2024-11-17 13:21:05.382261] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:16.426 [2024-11-17 13:21:05.382268] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:16.426 [2024-11-17 13:21:05.382277] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:16.426 13:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.426 13:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:16.426 13:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:16.426 13:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:16.426 13:21:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.426 13:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:16.426 13:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:16.426 13:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:16.426 13:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:16.426 13:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.426 13:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.426 13:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.426 13:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.426 13:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.426 13:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.426 13:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.426 13:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.426 13:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.426 13:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.426 "name": "Existed_Raid", 00:11:16.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.426 "strip_size_kb": 0, 00:11:16.426 "state": "configuring", 00:11:16.426 "raid_level": "raid1", 00:11:16.426 "superblock": false, 00:11:16.426 "num_base_bdevs": 4, 00:11:16.426 "num_base_bdevs_discovered": 1, 00:11:16.426 
"num_base_bdevs_operational": 4, 00:11:16.426 "base_bdevs_list": [ 00:11:16.426 { 00:11:16.426 "name": "BaseBdev1", 00:11:16.426 "uuid": "93179177-625a-4e90-ab61-675415864946", 00:11:16.426 "is_configured": true, 00:11:16.426 "data_offset": 0, 00:11:16.426 "data_size": 65536 00:11:16.426 }, 00:11:16.426 { 00:11:16.426 "name": "BaseBdev2", 00:11:16.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.426 "is_configured": false, 00:11:16.426 "data_offset": 0, 00:11:16.426 "data_size": 0 00:11:16.426 }, 00:11:16.426 { 00:11:16.426 "name": "BaseBdev3", 00:11:16.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.426 "is_configured": false, 00:11:16.426 "data_offset": 0, 00:11:16.426 "data_size": 0 00:11:16.426 }, 00:11:16.426 { 00:11:16.426 "name": "BaseBdev4", 00:11:16.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.426 "is_configured": false, 00:11:16.426 "data_offset": 0, 00:11:16.426 "data_size": 0 00:11:16.426 } 00:11:16.426 ] 00:11:16.426 }' 00:11:16.426 13:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.426 13:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.687 13:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:16.687 13:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.687 13:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.687 [2024-11-17 13:21:05.887531] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:16.687 BaseBdev2 00:11:16.687 13:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.687 13:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:16.687 13:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev2 00:11:16.687 13:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:16.687 13:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:16.687 13:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:16.687 13:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:16.687 13:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:16.687 13:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.687 13:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.687 13:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.687 13:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:16.687 13:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.687 13:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.962 [ 00:11:16.962 { 00:11:16.962 "name": "BaseBdev2", 00:11:16.962 "aliases": [ 00:11:16.962 "a571037d-7ee5-4fc4-9332-a68f35586a7c" 00:11:16.962 ], 00:11:16.962 "product_name": "Malloc disk", 00:11:16.962 "block_size": 512, 00:11:16.962 "num_blocks": 65536, 00:11:16.962 "uuid": "a571037d-7ee5-4fc4-9332-a68f35586a7c", 00:11:16.962 "assigned_rate_limits": { 00:11:16.962 "rw_ios_per_sec": 0, 00:11:16.962 "rw_mbytes_per_sec": 0, 00:11:16.962 "r_mbytes_per_sec": 0, 00:11:16.962 "w_mbytes_per_sec": 0 00:11:16.962 }, 00:11:16.962 "claimed": true, 00:11:16.962 "claim_type": "exclusive_write", 00:11:16.962 "zoned": false, 00:11:16.962 "supported_io_types": { 00:11:16.962 "read": true, 00:11:16.962 "write": true, 00:11:16.962 
"unmap": true, 00:11:16.962 "flush": true, 00:11:16.962 "reset": true, 00:11:16.962 "nvme_admin": false, 00:11:16.962 "nvme_io": false, 00:11:16.962 "nvme_io_md": false, 00:11:16.962 "write_zeroes": true, 00:11:16.962 "zcopy": true, 00:11:16.962 "get_zone_info": false, 00:11:16.962 "zone_management": false, 00:11:16.962 "zone_append": false, 00:11:16.962 "compare": false, 00:11:16.962 "compare_and_write": false, 00:11:16.962 "abort": true, 00:11:16.962 "seek_hole": false, 00:11:16.962 "seek_data": false, 00:11:16.962 "copy": true, 00:11:16.962 "nvme_iov_md": false 00:11:16.962 }, 00:11:16.962 "memory_domains": [ 00:11:16.962 { 00:11:16.962 "dma_device_id": "system", 00:11:16.962 "dma_device_type": 1 00:11:16.962 }, 00:11:16.962 { 00:11:16.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.962 "dma_device_type": 2 00:11:16.962 } 00:11:16.962 ], 00:11:16.962 "driver_specific": {} 00:11:16.962 } 00:11:16.962 ] 00:11:16.962 13:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.962 13:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:16.962 13:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:16.962 13:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:16.962 13:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:16.962 13:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.962 13:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:16.962 13:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:16.962 13:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:16.962 13:21:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:16.962 13:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.962 13:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.962 13:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.962 13:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.962 13:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.962 13:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.962 13:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.962 13:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.962 13:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.963 13:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.963 "name": "Existed_Raid", 00:11:16.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.963 "strip_size_kb": 0, 00:11:16.963 "state": "configuring", 00:11:16.963 "raid_level": "raid1", 00:11:16.963 "superblock": false, 00:11:16.963 "num_base_bdevs": 4, 00:11:16.963 "num_base_bdevs_discovered": 2, 00:11:16.963 "num_base_bdevs_operational": 4, 00:11:16.963 "base_bdevs_list": [ 00:11:16.963 { 00:11:16.963 "name": "BaseBdev1", 00:11:16.963 "uuid": "93179177-625a-4e90-ab61-675415864946", 00:11:16.963 "is_configured": true, 00:11:16.963 "data_offset": 0, 00:11:16.963 "data_size": 65536 00:11:16.963 }, 00:11:16.963 { 00:11:16.963 "name": "BaseBdev2", 00:11:16.963 "uuid": "a571037d-7ee5-4fc4-9332-a68f35586a7c", 00:11:16.963 "is_configured": true, 00:11:16.963 
"data_offset": 0, 00:11:16.963 "data_size": 65536 00:11:16.963 }, 00:11:16.963 { 00:11:16.963 "name": "BaseBdev3", 00:11:16.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.963 "is_configured": false, 00:11:16.963 "data_offset": 0, 00:11:16.963 "data_size": 0 00:11:16.963 }, 00:11:16.963 { 00:11:16.963 "name": "BaseBdev4", 00:11:16.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.963 "is_configured": false, 00:11:16.963 "data_offset": 0, 00:11:16.963 "data_size": 0 00:11:16.963 } 00:11:16.963 ] 00:11:16.963 }' 00:11:16.963 13:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.963 13:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.227 13:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:17.227 13:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.228 13:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.228 [2024-11-17 13:21:06.422300] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:17.228 BaseBdev3 00:11:17.228 13:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.228 13:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:17.228 13:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:17.228 13:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:17.228 13:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:17.228 13:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:17.228 13:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:11:17.228 13:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:17.228 13:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.228 13:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.228 13:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.228 13:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:17.228 13:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.228 13:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.486 [ 00:11:17.486 { 00:11:17.486 "name": "BaseBdev3", 00:11:17.486 "aliases": [ 00:11:17.486 "cd23ba83-b00d-42e5-8a3a-a649adf49b7c" 00:11:17.486 ], 00:11:17.486 "product_name": "Malloc disk", 00:11:17.486 "block_size": 512, 00:11:17.486 "num_blocks": 65536, 00:11:17.486 "uuid": "cd23ba83-b00d-42e5-8a3a-a649adf49b7c", 00:11:17.486 "assigned_rate_limits": { 00:11:17.486 "rw_ios_per_sec": 0, 00:11:17.486 "rw_mbytes_per_sec": 0, 00:11:17.486 "r_mbytes_per_sec": 0, 00:11:17.486 "w_mbytes_per_sec": 0 00:11:17.486 }, 00:11:17.486 "claimed": true, 00:11:17.486 "claim_type": "exclusive_write", 00:11:17.486 "zoned": false, 00:11:17.486 "supported_io_types": { 00:11:17.486 "read": true, 00:11:17.486 "write": true, 00:11:17.486 "unmap": true, 00:11:17.486 "flush": true, 00:11:17.486 "reset": true, 00:11:17.486 "nvme_admin": false, 00:11:17.486 "nvme_io": false, 00:11:17.486 "nvme_io_md": false, 00:11:17.486 "write_zeroes": true, 00:11:17.486 "zcopy": true, 00:11:17.486 "get_zone_info": false, 00:11:17.486 "zone_management": false, 00:11:17.486 "zone_append": false, 00:11:17.486 "compare": false, 00:11:17.486 "compare_and_write": false, 00:11:17.486 "abort": true, 
00:11:17.486 "seek_hole": false, 00:11:17.486 "seek_data": false, 00:11:17.486 "copy": true, 00:11:17.486 "nvme_iov_md": false 00:11:17.486 }, 00:11:17.486 "memory_domains": [ 00:11:17.486 { 00:11:17.486 "dma_device_id": "system", 00:11:17.486 "dma_device_type": 1 00:11:17.486 }, 00:11:17.486 { 00:11:17.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.486 "dma_device_type": 2 00:11:17.486 } 00:11:17.486 ], 00:11:17.486 "driver_specific": {} 00:11:17.486 } 00:11:17.486 ] 00:11:17.486 13:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.486 13:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:17.486 13:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:17.486 13:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:17.486 13:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:17.486 13:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:17.486 13:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:17.486 13:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:17.486 13:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:17.486 13:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:17.486 13:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.486 13:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.486 13:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.486 13:21:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.486 13:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.486 13:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.486 13:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.486 13:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.486 13:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.486 13:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.486 "name": "Existed_Raid", 00:11:17.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.486 "strip_size_kb": 0, 00:11:17.486 "state": "configuring", 00:11:17.486 "raid_level": "raid1", 00:11:17.486 "superblock": false, 00:11:17.486 "num_base_bdevs": 4, 00:11:17.486 "num_base_bdevs_discovered": 3, 00:11:17.486 "num_base_bdevs_operational": 4, 00:11:17.486 "base_bdevs_list": [ 00:11:17.486 { 00:11:17.486 "name": "BaseBdev1", 00:11:17.486 "uuid": "93179177-625a-4e90-ab61-675415864946", 00:11:17.486 "is_configured": true, 00:11:17.486 "data_offset": 0, 00:11:17.486 "data_size": 65536 00:11:17.486 }, 00:11:17.486 { 00:11:17.486 "name": "BaseBdev2", 00:11:17.486 "uuid": "a571037d-7ee5-4fc4-9332-a68f35586a7c", 00:11:17.486 "is_configured": true, 00:11:17.486 "data_offset": 0, 00:11:17.486 "data_size": 65536 00:11:17.486 }, 00:11:17.486 { 00:11:17.486 "name": "BaseBdev3", 00:11:17.486 "uuid": "cd23ba83-b00d-42e5-8a3a-a649adf49b7c", 00:11:17.486 "is_configured": true, 00:11:17.486 "data_offset": 0, 00:11:17.486 "data_size": 65536 00:11:17.486 }, 00:11:17.486 { 00:11:17.486 "name": "BaseBdev4", 00:11:17.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.486 "is_configured": false, 00:11:17.486 "data_offset": 
0, 00:11:17.486 "data_size": 0 00:11:17.486 } 00:11:17.486 ] 00:11:17.486 }' 00:11:17.486 13:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.486 13:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.745 13:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:17.745 13:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.745 13:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.745 [2024-11-17 13:21:06.904331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:17.745 [2024-11-17 13:21:06.904443] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:17.745 [2024-11-17 13:21:06.904455] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:17.745 [2024-11-17 13:21:06.904840] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:17.745 [2024-11-17 13:21:06.905011] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:17.745 [2024-11-17 13:21:06.905025] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:17.745 [2024-11-17 13:21:06.905304] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:17.745 BaseBdev4 00:11:17.745 13:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.745 13:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:17.745 13:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:17.745 13:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local 
bdev_timeout= 00:11:17.745 13:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:17.745 13:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:17.745 13:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:17.745 13:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:17.745 13:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.745 13:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.745 13:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.745 13:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:17.745 13:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.745 13:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.745 [ 00:11:17.745 { 00:11:17.745 "name": "BaseBdev4", 00:11:17.745 "aliases": [ 00:11:17.745 "eaa05262-5f5a-4737-b2d0-145dbfa337e9" 00:11:17.745 ], 00:11:17.745 "product_name": "Malloc disk", 00:11:17.745 "block_size": 512, 00:11:17.745 "num_blocks": 65536, 00:11:17.745 "uuid": "eaa05262-5f5a-4737-b2d0-145dbfa337e9", 00:11:17.745 "assigned_rate_limits": { 00:11:17.745 "rw_ios_per_sec": 0, 00:11:17.745 "rw_mbytes_per_sec": 0, 00:11:17.745 "r_mbytes_per_sec": 0, 00:11:17.745 "w_mbytes_per_sec": 0 00:11:17.745 }, 00:11:17.745 "claimed": true, 00:11:17.745 "claim_type": "exclusive_write", 00:11:17.745 "zoned": false, 00:11:17.745 "supported_io_types": { 00:11:17.745 "read": true, 00:11:17.745 "write": true, 00:11:17.745 "unmap": true, 00:11:17.745 "flush": true, 00:11:17.745 "reset": true, 00:11:17.745 "nvme_admin": false, 00:11:17.745 "nvme_io": 
false, 00:11:17.745 "nvme_io_md": false, 00:11:17.745 "write_zeroes": true, 00:11:17.745 "zcopy": true, 00:11:17.745 "get_zone_info": false, 00:11:17.745 "zone_management": false, 00:11:17.745 "zone_append": false, 00:11:17.745 "compare": false, 00:11:17.745 "compare_and_write": false, 00:11:17.745 "abort": true, 00:11:17.745 "seek_hole": false, 00:11:17.745 "seek_data": false, 00:11:17.745 "copy": true, 00:11:17.745 "nvme_iov_md": false 00:11:17.745 }, 00:11:17.745 "memory_domains": [ 00:11:17.745 { 00:11:17.745 "dma_device_id": "system", 00:11:17.745 "dma_device_type": 1 00:11:17.745 }, 00:11:17.745 { 00:11:17.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.745 "dma_device_type": 2 00:11:17.745 } 00:11:17.745 ], 00:11:17.745 "driver_specific": {} 00:11:17.745 } 00:11:17.745 ] 00:11:17.745 13:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.745 13:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:17.745 13:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:17.745 13:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:17.745 13:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:17.745 13:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:17.745 13:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:17.745 13:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:17.745 13:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:17.745 13:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:17.745 13:21:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.745 13:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.745 13:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.745 13:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.745 13:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.745 13:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.745 13:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.745 13:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.003 13:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.003 13:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.003 "name": "Existed_Raid", 00:11:18.003 "uuid": "d32c76a6-43e9-4146-8d3c-e5212190fdbe", 00:11:18.003 "strip_size_kb": 0, 00:11:18.003 "state": "online", 00:11:18.003 "raid_level": "raid1", 00:11:18.003 "superblock": false, 00:11:18.003 "num_base_bdevs": 4, 00:11:18.003 "num_base_bdevs_discovered": 4, 00:11:18.003 "num_base_bdevs_operational": 4, 00:11:18.003 "base_bdevs_list": [ 00:11:18.003 { 00:11:18.003 "name": "BaseBdev1", 00:11:18.003 "uuid": "93179177-625a-4e90-ab61-675415864946", 00:11:18.003 "is_configured": true, 00:11:18.003 "data_offset": 0, 00:11:18.003 "data_size": 65536 00:11:18.003 }, 00:11:18.003 { 00:11:18.003 "name": "BaseBdev2", 00:11:18.003 "uuid": "a571037d-7ee5-4fc4-9332-a68f35586a7c", 00:11:18.003 "is_configured": true, 00:11:18.003 "data_offset": 0, 00:11:18.003 "data_size": 65536 00:11:18.003 }, 00:11:18.003 { 00:11:18.003 "name": "BaseBdev3", 00:11:18.003 "uuid": "cd23ba83-b00d-42e5-8a3a-a649adf49b7c", 
00:11:18.003 "is_configured": true, 00:11:18.003 "data_offset": 0, 00:11:18.003 "data_size": 65536 00:11:18.003 }, 00:11:18.003 { 00:11:18.003 "name": "BaseBdev4", 00:11:18.003 "uuid": "eaa05262-5f5a-4737-b2d0-145dbfa337e9", 00:11:18.003 "is_configured": true, 00:11:18.003 "data_offset": 0, 00:11:18.003 "data_size": 65536 00:11:18.003 } 00:11:18.003 ] 00:11:18.003 }' 00:11:18.003 13:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.003 13:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.261 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:18.261 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:18.261 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:18.261 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:18.261 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:18.261 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:18.261 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:18.261 13:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.261 13:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.261 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:18.261 [2024-11-17 13:21:07.359981] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:18.261 13:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.261 13:21:07 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:18.261 "name": "Existed_Raid", 00:11:18.261 "aliases": [ 00:11:18.262 "d32c76a6-43e9-4146-8d3c-e5212190fdbe" 00:11:18.262 ], 00:11:18.262 "product_name": "Raid Volume", 00:11:18.262 "block_size": 512, 00:11:18.262 "num_blocks": 65536, 00:11:18.262 "uuid": "d32c76a6-43e9-4146-8d3c-e5212190fdbe", 00:11:18.262 "assigned_rate_limits": { 00:11:18.262 "rw_ios_per_sec": 0, 00:11:18.262 "rw_mbytes_per_sec": 0, 00:11:18.262 "r_mbytes_per_sec": 0, 00:11:18.262 "w_mbytes_per_sec": 0 00:11:18.262 }, 00:11:18.262 "claimed": false, 00:11:18.262 "zoned": false, 00:11:18.262 "supported_io_types": { 00:11:18.262 "read": true, 00:11:18.262 "write": true, 00:11:18.262 "unmap": false, 00:11:18.262 "flush": false, 00:11:18.262 "reset": true, 00:11:18.262 "nvme_admin": false, 00:11:18.262 "nvme_io": false, 00:11:18.262 "nvme_io_md": false, 00:11:18.262 "write_zeroes": true, 00:11:18.262 "zcopy": false, 00:11:18.262 "get_zone_info": false, 00:11:18.262 "zone_management": false, 00:11:18.262 "zone_append": false, 00:11:18.262 "compare": false, 00:11:18.262 "compare_and_write": false, 00:11:18.262 "abort": false, 00:11:18.262 "seek_hole": false, 00:11:18.262 "seek_data": false, 00:11:18.262 "copy": false, 00:11:18.262 "nvme_iov_md": false 00:11:18.262 }, 00:11:18.262 "memory_domains": [ 00:11:18.262 { 00:11:18.262 "dma_device_id": "system", 00:11:18.262 "dma_device_type": 1 00:11:18.262 }, 00:11:18.262 { 00:11:18.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.262 "dma_device_type": 2 00:11:18.262 }, 00:11:18.262 { 00:11:18.262 "dma_device_id": "system", 00:11:18.262 "dma_device_type": 1 00:11:18.262 }, 00:11:18.262 { 00:11:18.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.262 "dma_device_type": 2 00:11:18.262 }, 00:11:18.262 { 00:11:18.262 "dma_device_id": "system", 00:11:18.262 "dma_device_type": 1 00:11:18.262 }, 00:11:18.262 { 00:11:18.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.262 "dma_device_type": 2 
00:11:18.262 }, 00:11:18.262 { 00:11:18.262 "dma_device_id": "system", 00:11:18.262 "dma_device_type": 1 00:11:18.262 }, 00:11:18.262 { 00:11:18.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.262 "dma_device_type": 2 00:11:18.262 } 00:11:18.262 ], 00:11:18.262 "driver_specific": { 00:11:18.262 "raid": { 00:11:18.262 "uuid": "d32c76a6-43e9-4146-8d3c-e5212190fdbe", 00:11:18.262 "strip_size_kb": 0, 00:11:18.262 "state": "online", 00:11:18.262 "raid_level": "raid1", 00:11:18.262 "superblock": false, 00:11:18.262 "num_base_bdevs": 4, 00:11:18.262 "num_base_bdevs_discovered": 4, 00:11:18.262 "num_base_bdevs_operational": 4, 00:11:18.262 "base_bdevs_list": [ 00:11:18.262 { 00:11:18.262 "name": "BaseBdev1", 00:11:18.262 "uuid": "93179177-625a-4e90-ab61-675415864946", 00:11:18.262 "is_configured": true, 00:11:18.262 "data_offset": 0, 00:11:18.262 "data_size": 65536 00:11:18.262 }, 00:11:18.262 { 00:11:18.262 "name": "BaseBdev2", 00:11:18.262 "uuid": "a571037d-7ee5-4fc4-9332-a68f35586a7c", 00:11:18.262 "is_configured": true, 00:11:18.262 "data_offset": 0, 00:11:18.262 "data_size": 65536 00:11:18.262 }, 00:11:18.262 { 00:11:18.262 "name": "BaseBdev3", 00:11:18.262 "uuid": "cd23ba83-b00d-42e5-8a3a-a649adf49b7c", 00:11:18.262 "is_configured": true, 00:11:18.262 "data_offset": 0, 00:11:18.262 "data_size": 65536 00:11:18.262 }, 00:11:18.262 { 00:11:18.262 "name": "BaseBdev4", 00:11:18.262 "uuid": "eaa05262-5f5a-4737-b2d0-145dbfa337e9", 00:11:18.262 "is_configured": true, 00:11:18.262 "data_offset": 0, 00:11:18.262 "data_size": 65536 00:11:18.262 } 00:11:18.262 ] 00:11:18.262 } 00:11:18.262 } 00:11:18.262 }' 00:11:18.262 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:18.262 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:18.262 BaseBdev2 00:11:18.262 BaseBdev3 00:11:18.262 BaseBdev4' 00:11:18.262 
13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:18.262 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:18.262 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:18.262 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:18.262 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:18.262 13:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.262 13:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.520 13:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.520 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:18.520 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:18.520 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:18.520 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:18.520 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:18.520 13:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.520 13:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.520 13:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.520 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 
-- # cmp_base_bdev='512 ' 00:11:18.520 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:18.520 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:18.520 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:18.520 13:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.520 13:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.520 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:18.520 13:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.520 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:18.520 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:18.520 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:18.520 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:18.520 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:18.520 13:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.520 13:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.520 13:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.520 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:18.520 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 
512 == \5\1\2\ \ \ ]] 00:11:18.520 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:18.520 13:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.520 13:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.520 [2024-11-17 13:21:07.671173] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:18.778 13:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.778 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:18.778 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:18.778 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:18.778 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:18.778 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:18.778 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:18.778 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:18.778 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:18.778 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:18.778 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:18.778 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:18.778 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.778 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:11:18.778 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.778 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.778 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.778 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.778 13:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.778 13:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.778 13:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.778 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.778 "name": "Existed_Raid", 00:11:18.778 "uuid": "d32c76a6-43e9-4146-8d3c-e5212190fdbe", 00:11:18.778 "strip_size_kb": 0, 00:11:18.778 "state": "online", 00:11:18.779 "raid_level": "raid1", 00:11:18.779 "superblock": false, 00:11:18.779 "num_base_bdevs": 4, 00:11:18.779 "num_base_bdevs_discovered": 3, 00:11:18.779 "num_base_bdevs_operational": 3, 00:11:18.779 "base_bdevs_list": [ 00:11:18.779 { 00:11:18.779 "name": null, 00:11:18.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.779 "is_configured": false, 00:11:18.779 "data_offset": 0, 00:11:18.779 "data_size": 65536 00:11:18.779 }, 00:11:18.779 { 00:11:18.779 "name": "BaseBdev2", 00:11:18.779 "uuid": "a571037d-7ee5-4fc4-9332-a68f35586a7c", 00:11:18.779 "is_configured": true, 00:11:18.779 "data_offset": 0, 00:11:18.779 "data_size": 65536 00:11:18.779 }, 00:11:18.779 { 00:11:18.779 "name": "BaseBdev3", 00:11:18.779 "uuid": "cd23ba83-b00d-42e5-8a3a-a649adf49b7c", 00:11:18.779 "is_configured": true, 00:11:18.779 "data_offset": 0, 00:11:18.779 "data_size": 65536 00:11:18.779 }, 00:11:18.779 { 
00:11:18.779 "name": "BaseBdev4", 00:11:18.779 "uuid": "eaa05262-5f5a-4737-b2d0-145dbfa337e9", 00:11:18.779 "is_configured": true, 00:11:18.779 "data_offset": 0, 00:11:18.779 "data_size": 65536 00:11:18.779 } 00:11:18.779 ] 00:11:18.779 }' 00:11:18.779 13:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.779 13:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.037 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:19.037 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:19.037 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.037 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:19.037 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.037 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.037 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.037 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:19.037 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:19.037 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:19.037 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.037 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.037 [2024-11-17 13:21:08.235050] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:19.295 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.295 
13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:19.295 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:19.295 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.295 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.295 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.295 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:19.295 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.295 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:19.295 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:19.295 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:19.295 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.295 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.295 [2024-11-17 13:21:08.388592] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:19.295 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.295 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:19.295 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:19.295 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.295 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.295 13:21:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.295 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:19.295 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.554 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:19.554 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:19.554 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:19.554 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.554 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.554 [2024-11-17 13:21:08.551476] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:19.554 [2024-11-17 13:21:08.551635] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:19.554 [2024-11-17 13:21:08.649073] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:19.554 [2024-11-17 13:21:08.649126] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:19.554 [2024-11-17 13:21:08.649139] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:19.554 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.554 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:19.554 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:19.554 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.554 13:21:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:19.554 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.554 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.554 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.554 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:19.554 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:19.554 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:19.554 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:19.554 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:19.554 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:19.554 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.554 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.554 BaseBdev2 00:11:19.554 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.554 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:19.554 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:19.554 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:19.554 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:19.554 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:19.554 13:21:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:19.554 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:19.554 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.554 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.554 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.554 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:19.554 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.554 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.554 [ 00:11:19.554 { 00:11:19.554 "name": "BaseBdev2", 00:11:19.554 "aliases": [ 00:11:19.554 "7de5ade6-45f1-4874-8d97-0c684eb51861" 00:11:19.554 ], 00:11:19.554 "product_name": "Malloc disk", 00:11:19.554 "block_size": 512, 00:11:19.554 "num_blocks": 65536, 00:11:19.554 "uuid": "7de5ade6-45f1-4874-8d97-0c684eb51861", 00:11:19.554 "assigned_rate_limits": { 00:11:19.554 "rw_ios_per_sec": 0, 00:11:19.554 "rw_mbytes_per_sec": 0, 00:11:19.554 "r_mbytes_per_sec": 0, 00:11:19.554 "w_mbytes_per_sec": 0 00:11:19.554 }, 00:11:19.554 "claimed": false, 00:11:19.554 "zoned": false, 00:11:19.554 "supported_io_types": { 00:11:19.554 "read": true, 00:11:19.554 "write": true, 00:11:19.554 "unmap": true, 00:11:19.554 "flush": true, 00:11:19.554 "reset": true, 00:11:19.554 "nvme_admin": false, 00:11:19.554 "nvme_io": false, 00:11:19.554 "nvme_io_md": false, 00:11:19.554 "write_zeroes": true, 00:11:19.554 "zcopy": true, 00:11:19.554 "get_zone_info": false, 00:11:19.554 "zone_management": false, 00:11:19.554 "zone_append": false, 00:11:19.554 "compare": false, 00:11:19.554 "compare_and_write": false, 
00:11:19.554 "abort": true, 00:11:19.554 "seek_hole": false, 00:11:19.554 "seek_data": false, 00:11:19.554 "copy": true, 00:11:19.554 "nvme_iov_md": false 00:11:19.554 }, 00:11:19.554 "memory_domains": [ 00:11:19.554 { 00:11:19.813 "dma_device_id": "system", 00:11:19.813 "dma_device_type": 1 00:11:19.813 }, 00:11:19.813 { 00:11:19.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.813 "dma_device_type": 2 00:11:19.813 } 00:11:19.813 ], 00:11:19.813 "driver_specific": {} 00:11:19.813 } 00:11:19.813 ] 00:11:19.813 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.813 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:19.813 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:19.813 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:19.813 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:19.813 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.813 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.813 BaseBdev3 00:11:19.813 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.813 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:19.813 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:19.813 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:19.813 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:19.813 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:19.813 13:21:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:19.813 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:19.813 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.813 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.813 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.813 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:19.813 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.813 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.813 [ 00:11:19.813 { 00:11:19.813 "name": "BaseBdev3", 00:11:19.813 "aliases": [ 00:11:19.813 "cbc1eb39-1931-4cec-a57c-8cf5654d7bf0" 00:11:19.813 ], 00:11:19.813 "product_name": "Malloc disk", 00:11:19.813 "block_size": 512, 00:11:19.813 "num_blocks": 65536, 00:11:19.813 "uuid": "cbc1eb39-1931-4cec-a57c-8cf5654d7bf0", 00:11:19.813 "assigned_rate_limits": { 00:11:19.813 "rw_ios_per_sec": 0, 00:11:19.813 "rw_mbytes_per_sec": 0, 00:11:19.813 "r_mbytes_per_sec": 0, 00:11:19.813 "w_mbytes_per_sec": 0 00:11:19.813 }, 00:11:19.813 "claimed": false, 00:11:19.813 "zoned": false, 00:11:19.813 "supported_io_types": { 00:11:19.813 "read": true, 00:11:19.813 "write": true, 00:11:19.813 "unmap": true, 00:11:19.813 "flush": true, 00:11:19.813 "reset": true, 00:11:19.813 "nvme_admin": false, 00:11:19.813 "nvme_io": false, 00:11:19.813 "nvme_io_md": false, 00:11:19.813 "write_zeroes": true, 00:11:19.813 "zcopy": true, 00:11:19.813 "get_zone_info": false, 00:11:19.813 "zone_management": false, 00:11:19.813 "zone_append": false, 00:11:19.813 "compare": false, 00:11:19.813 "compare_and_write": false, 
00:11:19.813 "abort": true, 00:11:19.813 "seek_hole": false, 00:11:19.813 "seek_data": false, 00:11:19.813 "copy": true, 00:11:19.813 "nvme_iov_md": false 00:11:19.813 }, 00:11:19.813 "memory_domains": [ 00:11:19.813 { 00:11:19.813 "dma_device_id": "system", 00:11:19.813 "dma_device_type": 1 00:11:19.813 }, 00:11:19.813 { 00:11:19.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.813 "dma_device_type": 2 00:11:19.813 } 00:11:19.813 ], 00:11:19.813 "driver_specific": {} 00:11:19.813 } 00:11:19.813 ] 00:11:19.813 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.813 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:19.813 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:19.813 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:19.813 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:19.813 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.813 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.813 BaseBdev4 00:11:19.813 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.813 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:19.813 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:19.813 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:19.813 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:19.813 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:19.813 13:21:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:19.813 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:19.813 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.813 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.813 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.813 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:19.813 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.813 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.813 [ 00:11:19.813 { 00:11:19.813 "name": "BaseBdev4", 00:11:19.813 "aliases": [ 00:11:19.813 "c94cc5e9-5951-475f-a208-7a3ba41d9269" 00:11:19.813 ], 00:11:19.813 "product_name": "Malloc disk", 00:11:19.813 "block_size": 512, 00:11:19.813 "num_blocks": 65536, 00:11:19.813 "uuid": "c94cc5e9-5951-475f-a208-7a3ba41d9269", 00:11:19.813 "assigned_rate_limits": { 00:11:19.813 "rw_ios_per_sec": 0, 00:11:19.813 "rw_mbytes_per_sec": 0, 00:11:19.813 "r_mbytes_per_sec": 0, 00:11:19.813 "w_mbytes_per_sec": 0 00:11:19.813 }, 00:11:19.813 "claimed": false, 00:11:19.813 "zoned": false, 00:11:19.813 "supported_io_types": { 00:11:19.813 "read": true, 00:11:19.813 "write": true, 00:11:19.813 "unmap": true, 00:11:19.813 "flush": true, 00:11:19.813 "reset": true, 00:11:19.813 "nvme_admin": false, 00:11:19.813 "nvme_io": false, 00:11:19.813 "nvme_io_md": false, 00:11:19.813 "write_zeroes": true, 00:11:19.813 "zcopy": true, 00:11:19.813 "get_zone_info": false, 00:11:19.813 "zone_management": false, 00:11:19.813 "zone_append": false, 00:11:19.813 "compare": false, 00:11:19.813 "compare_and_write": false, 
00:11:19.813 "abort": true, 00:11:19.813 "seek_hole": false, 00:11:19.813 "seek_data": false, 00:11:19.813 "copy": true, 00:11:19.813 "nvme_iov_md": false 00:11:19.813 }, 00:11:19.813 "memory_domains": [ 00:11:19.813 { 00:11:19.813 "dma_device_id": "system", 00:11:19.813 "dma_device_type": 1 00:11:19.813 }, 00:11:19.813 { 00:11:19.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.813 "dma_device_type": 2 00:11:19.813 } 00:11:19.813 ], 00:11:19.813 "driver_specific": {} 00:11:19.813 } 00:11:19.813 ] 00:11:19.813 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.813 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:19.813 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:19.813 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:19.813 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:19.813 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.813 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.813 [2024-11-17 13:21:08.946836] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:19.813 [2024-11-17 13:21:08.946924] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:19.813 [2024-11-17 13:21:08.946966] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:19.813 [2024-11-17 13:21:08.948754] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:19.813 [2024-11-17 13:21:08.948840] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:19.813 13:21:08 
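At this point the test has recreated BaseBdev2 through BaseBdev4 and called `bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid` while BaseBdev1 is still missing, so the array sits in the `configuring` state with three of four base bdevs discovered. The `verify_raid_bdev_state` helper then compares the fields of the dumped `raid_bdev_info` JSON against expectations. A rough, hedged approximation of those checks in Python (the fixture is abbreviated from the JSON printed below; the real helper lives in `bdev_raid.sh`):

```python
import json

# Abbreviated raid_bdev_info as printed in the trace; real output comes
# from `rpc.py bdev_raid_get_bdevs all`.
raid_bdev_info = json.loads("""{
  "name": "Existed_Raid",
  "strip_size_kb": 0,
  "state": "configuring",
  "raid_level": "raid1",
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 3,
  "num_base_bdevs_operational": 4
}""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size):
    # Approximates the shell helper of the same name: field-by-field
    # comparison of the raid bdev's reported state.
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    # While configuring, discovered may lag the operational count.
    assert info["num_base_bdevs_discovered"] <= info["num_base_bdevs_operational"]

verify_raid_bdev_state(raid_bdev_info, "configuring", "raid1", 0)
print("state checks passed")
```

For raid1 the strip size is 0, which is why the trace passes `0` as the strip-size argument throughout.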
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.813 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:19.813 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:19.813 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:19.813 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:19.813 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:19.814 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:19.814 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.814 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.814 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.814 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.814 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.814 13:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:19.814 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.814 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.814 13:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.814 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.814 "name": "Existed_Raid", 00:11:19.814 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:19.814 "strip_size_kb": 0, 00:11:19.814 "state": "configuring", 00:11:19.814 "raid_level": "raid1", 00:11:19.814 "superblock": false, 00:11:19.814 "num_base_bdevs": 4, 00:11:19.814 "num_base_bdevs_discovered": 3, 00:11:19.814 "num_base_bdevs_operational": 4, 00:11:19.814 "base_bdevs_list": [ 00:11:19.814 { 00:11:19.814 "name": "BaseBdev1", 00:11:19.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.814 "is_configured": false, 00:11:19.814 "data_offset": 0, 00:11:19.814 "data_size": 0 00:11:19.814 }, 00:11:19.814 { 00:11:19.814 "name": "BaseBdev2", 00:11:19.814 "uuid": "7de5ade6-45f1-4874-8d97-0c684eb51861", 00:11:19.814 "is_configured": true, 00:11:19.814 "data_offset": 0, 00:11:19.814 "data_size": 65536 00:11:19.814 }, 00:11:19.814 { 00:11:19.814 "name": "BaseBdev3", 00:11:19.814 "uuid": "cbc1eb39-1931-4cec-a57c-8cf5654d7bf0", 00:11:19.814 "is_configured": true, 00:11:19.814 "data_offset": 0, 00:11:19.814 "data_size": 65536 00:11:19.814 }, 00:11:19.814 { 00:11:19.814 "name": "BaseBdev4", 00:11:19.814 "uuid": "c94cc5e9-5951-475f-a208-7a3ba41d9269", 00:11:19.814 "is_configured": true, 00:11:19.814 "data_offset": 0, 00:11:19.814 "data_size": 65536 00:11:19.814 } 00:11:19.814 ] 00:11:19.814 }' 00:11:19.814 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.814 13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.379 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:20.379 13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.379 13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.379 [2024-11-17 13:21:09.378088] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:20.379 13:21:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.379 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:20.379 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:20.379 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:20.379 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:20.379 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:20.379 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:20.379 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.379 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.379 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.379 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.379 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.379 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.379 13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.379 13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.379 13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.379 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.379 "name": "Existed_Raid", 00:11:20.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.379 
"strip_size_kb": 0, 00:11:20.379 "state": "configuring", 00:11:20.379 "raid_level": "raid1", 00:11:20.379 "superblock": false, 00:11:20.379 "num_base_bdevs": 4, 00:11:20.379 "num_base_bdevs_discovered": 2, 00:11:20.379 "num_base_bdevs_operational": 4, 00:11:20.379 "base_bdevs_list": [ 00:11:20.379 { 00:11:20.379 "name": "BaseBdev1", 00:11:20.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.379 "is_configured": false, 00:11:20.379 "data_offset": 0, 00:11:20.379 "data_size": 0 00:11:20.379 }, 00:11:20.379 { 00:11:20.379 "name": null, 00:11:20.379 "uuid": "7de5ade6-45f1-4874-8d97-0c684eb51861", 00:11:20.379 "is_configured": false, 00:11:20.379 "data_offset": 0, 00:11:20.379 "data_size": 65536 00:11:20.379 }, 00:11:20.379 { 00:11:20.379 "name": "BaseBdev3", 00:11:20.379 "uuid": "cbc1eb39-1931-4cec-a57c-8cf5654d7bf0", 00:11:20.379 "is_configured": true, 00:11:20.379 "data_offset": 0, 00:11:20.379 "data_size": 65536 00:11:20.379 }, 00:11:20.379 { 00:11:20.379 "name": "BaseBdev4", 00:11:20.379 "uuid": "c94cc5e9-5951-475f-a208-7a3ba41d9269", 00:11:20.379 "is_configured": true, 00:11:20.379 "data_offset": 0, 00:11:20.379 "data_size": 65536 00:11:20.379 } 00:11:20.379 ] 00:11:20.379 }' 00:11:20.379 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.379 13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.638 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.638 13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.638 13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.638 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:20.638 13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.638 13:21:09 
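After `bdev_raid_remove_base_bdev BaseBdev2`, the slot stays in `base_bdevs_list` but with `"name": null` and `"is_configured": false`, which the trace probes with `jq '.[0].base_bdevs_list[1].is_configured'`. A small sketch of that slot inspection, on a fixture trimmed from the JSON above (uuids dropped for readability):

```python
import json

# Trimmed base_bdevs_list after removing BaseBdev2; BaseBdev1 has not
# been created yet, so its slot is also unconfigured.
bdevs = json.loads("""[{
  "name": "Existed_Raid",
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": false},
    {"name": null,        "is_configured": false},
    {"name": "BaseBdev3", "is_configured": true},
    {"name": "BaseBdev4", "is_configured": true}
  ]
}]""")

# Mirror jq '.[0].base_bdevs_list[1].is_configured'
slot = bdevs[0]["base_bdevs_list"][1]
assert slot["name"] is None and not slot["is_configured"]

# The configured slots are what num_base_bdevs_discovered counts.
discovered = sum(b["is_configured"] for b in bdevs[0]["base_bdevs_list"])
print(discovered)  # 2, matching num_base_bdevs_discovered in the trace
```

This is the state the later `bdev_raid_add_base_bdev Existed_Raid BaseBdev3` steps repair, raising the discovered count back toward `num_base_bdevs_operational`.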
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:20.638 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:20.638 13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.638 13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.896 [2024-11-17 13:21:09.866448] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:20.896 BaseBdev1 00:11:20.896 13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.896 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:20.896 13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:20.896 13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:20.896 13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:20.896 13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:20.896 13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:20.896 13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:20.896 13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.896 13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.896 13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.896 13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:20.896 13:21:09 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.896 13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.896 [ 00:11:20.896 { 00:11:20.896 "name": "BaseBdev1", 00:11:20.896 "aliases": [ 00:11:20.896 "6db44223-5f59-49b3-a14e-806f49b9a46f" 00:11:20.896 ], 00:11:20.896 "product_name": "Malloc disk", 00:11:20.896 "block_size": 512, 00:11:20.896 "num_blocks": 65536, 00:11:20.896 "uuid": "6db44223-5f59-49b3-a14e-806f49b9a46f", 00:11:20.896 "assigned_rate_limits": { 00:11:20.896 "rw_ios_per_sec": 0, 00:11:20.896 "rw_mbytes_per_sec": 0, 00:11:20.896 "r_mbytes_per_sec": 0, 00:11:20.896 "w_mbytes_per_sec": 0 00:11:20.896 }, 00:11:20.896 "claimed": true, 00:11:20.896 "claim_type": "exclusive_write", 00:11:20.896 "zoned": false, 00:11:20.896 "supported_io_types": { 00:11:20.896 "read": true, 00:11:20.896 "write": true, 00:11:20.896 "unmap": true, 00:11:20.896 "flush": true, 00:11:20.896 "reset": true, 00:11:20.896 "nvme_admin": false, 00:11:20.896 "nvme_io": false, 00:11:20.896 "nvme_io_md": false, 00:11:20.896 "write_zeroes": true, 00:11:20.896 "zcopy": true, 00:11:20.896 "get_zone_info": false, 00:11:20.896 "zone_management": false, 00:11:20.896 "zone_append": false, 00:11:20.896 "compare": false, 00:11:20.896 "compare_and_write": false, 00:11:20.896 "abort": true, 00:11:20.896 "seek_hole": false, 00:11:20.896 "seek_data": false, 00:11:20.896 "copy": true, 00:11:20.896 "nvme_iov_md": false 00:11:20.896 }, 00:11:20.896 "memory_domains": [ 00:11:20.896 { 00:11:20.896 "dma_device_id": "system", 00:11:20.896 "dma_device_type": 1 00:11:20.896 }, 00:11:20.896 { 00:11:20.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.896 "dma_device_type": 2 00:11:20.896 } 00:11:20.896 ], 00:11:20.896 "driver_specific": {} 00:11:20.896 } 00:11:20.896 ] 00:11:20.896 13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.896 13:21:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@911 -- # return 0 00:11:20.896 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:20.896 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:20.896 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:20.896 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:20.896 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:20.896 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:20.896 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.896 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.896 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.896 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.896 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.896 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.896 13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.896 13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.896 13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.896 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.896 "name": "Existed_Raid", 00:11:20.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.896 
"strip_size_kb": 0, 00:11:20.896 "state": "configuring", 00:11:20.896 "raid_level": "raid1", 00:11:20.896 "superblock": false, 00:11:20.896 "num_base_bdevs": 4, 00:11:20.896 "num_base_bdevs_discovered": 3, 00:11:20.896 "num_base_bdevs_operational": 4, 00:11:20.896 "base_bdevs_list": [ 00:11:20.896 { 00:11:20.896 "name": "BaseBdev1", 00:11:20.896 "uuid": "6db44223-5f59-49b3-a14e-806f49b9a46f", 00:11:20.896 "is_configured": true, 00:11:20.896 "data_offset": 0, 00:11:20.896 "data_size": 65536 00:11:20.896 }, 00:11:20.896 { 00:11:20.896 "name": null, 00:11:20.896 "uuid": "7de5ade6-45f1-4874-8d97-0c684eb51861", 00:11:20.896 "is_configured": false, 00:11:20.896 "data_offset": 0, 00:11:20.896 "data_size": 65536 00:11:20.896 }, 00:11:20.896 { 00:11:20.896 "name": "BaseBdev3", 00:11:20.896 "uuid": "cbc1eb39-1931-4cec-a57c-8cf5654d7bf0", 00:11:20.896 "is_configured": true, 00:11:20.896 "data_offset": 0, 00:11:20.896 "data_size": 65536 00:11:20.896 }, 00:11:20.896 { 00:11:20.896 "name": "BaseBdev4", 00:11:20.896 "uuid": "c94cc5e9-5951-475f-a208-7a3ba41d9269", 00:11:20.896 "is_configured": true, 00:11:20.896 "data_offset": 0, 00:11:20.896 "data_size": 65536 00:11:20.896 } 00:11:20.896 ] 00:11:20.896 }' 00:11:20.896 13:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.896 13:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.154 13:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:21.154 13:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.154 13:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.154 13:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.154 13:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.154 
13:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:21.154 13:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:21.154 13:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.154 13:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.154 [2024-11-17 13:21:10.365733] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:21.154 13:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.154 13:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:21.154 13:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:21.154 13:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:21.154 13:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:21.154 13:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:21.154 13:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:21.154 13:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.154 13:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.154 13:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.154 13:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.154 13:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.412 13:21:10 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.412 13:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.412 13:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.412 13:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.412 13:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.412 "name": "Existed_Raid", 00:11:21.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.412 "strip_size_kb": 0, 00:11:21.412 "state": "configuring", 00:11:21.412 "raid_level": "raid1", 00:11:21.412 "superblock": false, 00:11:21.412 "num_base_bdevs": 4, 00:11:21.412 "num_base_bdevs_discovered": 2, 00:11:21.412 "num_base_bdevs_operational": 4, 00:11:21.412 "base_bdevs_list": [ 00:11:21.412 { 00:11:21.412 "name": "BaseBdev1", 00:11:21.412 "uuid": "6db44223-5f59-49b3-a14e-806f49b9a46f", 00:11:21.412 "is_configured": true, 00:11:21.412 "data_offset": 0, 00:11:21.412 "data_size": 65536 00:11:21.412 }, 00:11:21.412 { 00:11:21.412 "name": null, 00:11:21.412 "uuid": "7de5ade6-45f1-4874-8d97-0c684eb51861", 00:11:21.412 "is_configured": false, 00:11:21.412 "data_offset": 0, 00:11:21.412 "data_size": 65536 00:11:21.412 }, 00:11:21.412 { 00:11:21.412 "name": null, 00:11:21.412 "uuid": "cbc1eb39-1931-4cec-a57c-8cf5654d7bf0", 00:11:21.412 "is_configured": false, 00:11:21.412 "data_offset": 0, 00:11:21.412 "data_size": 65536 00:11:21.412 }, 00:11:21.412 { 00:11:21.412 "name": "BaseBdev4", 00:11:21.412 "uuid": "c94cc5e9-5951-475f-a208-7a3ba41d9269", 00:11:21.412 "is_configured": true, 00:11:21.412 "data_offset": 0, 00:11:21.412 "data_size": 65536 00:11:21.412 } 00:11:21.412 ] 00:11:21.412 }' 00:11:21.412 13:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.412 13:21:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:21.671 13:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:21.671 13:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.671 13:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.671 13:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.671 13:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.671 13:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:21.671 13:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:21.671 13:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.671 13:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.671 [2024-11-17 13:21:10.793102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:21.671 13:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.671 13:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:21.671 13:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:21.671 13:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:21.671 13:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:21.671 13:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:21.671 13:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:11:21.671 13:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.671 13:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.671 13:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.671 13:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.671 13:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.671 13:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.671 13:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.671 13:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.671 13:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.671 13:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.671 "name": "Existed_Raid", 00:11:21.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.671 "strip_size_kb": 0, 00:11:21.671 "state": "configuring", 00:11:21.671 "raid_level": "raid1", 00:11:21.671 "superblock": false, 00:11:21.671 "num_base_bdevs": 4, 00:11:21.671 "num_base_bdevs_discovered": 3, 00:11:21.671 "num_base_bdevs_operational": 4, 00:11:21.671 "base_bdevs_list": [ 00:11:21.671 { 00:11:21.671 "name": "BaseBdev1", 00:11:21.671 "uuid": "6db44223-5f59-49b3-a14e-806f49b9a46f", 00:11:21.671 "is_configured": true, 00:11:21.671 "data_offset": 0, 00:11:21.671 "data_size": 65536 00:11:21.671 }, 00:11:21.671 { 00:11:21.671 "name": null, 00:11:21.671 "uuid": "7de5ade6-45f1-4874-8d97-0c684eb51861", 00:11:21.671 "is_configured": false, 00:11:21.671 "data_offset": 0, 00:11:21.671 "data_size": 65536 00:11:21.671 }, 00:11:21.671 { 
00:11:21.671 "name": "BaseBdev3", 00:11:21.671 "uuid": "cbc1eb39-1931-4cec-a57c-8cf5654d7bf0", 00:11:21.671 "is_configured": true, 00:11:21.671 "data_offset": 0, 00:11:21.671 "data_size": 65536 00:11:21.671 }, 00:11:21.671 { 00:11:21.671 "name": "BaseBdev4", 00:11:21.671 "uuid": "c94cc5e9-5951-475f-a208-7a3ba41d9269", 00:11:21.671 "is_configured": true, 00:11:21.671 "data_offset": 0, 00:11:21.672 "data_size": 65536 00:11:21.672 } 00:11:21.672 ] 00:11:21.672 }' 00:11:21.672 13:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.672 13:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.238 13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.238 13:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.238 13:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.238 13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:22.238 13:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.238 13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:22.238 13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:22.238 13:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.238 13:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.238 [2024-11-17 13:21:11.192450] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:22.238 13:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.238 13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:22.238 13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.238 13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:22.238 13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:22.238 13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:22.238 13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:22.239 13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.239 13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.239 13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.239 13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.239 13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.239 13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.239 13:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.239 13:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.239 13:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.239 13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.239 "name": "Existed_Raid", 00:11:22.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.239 "strip_size_kb": 0, 00:11:22.239 "state": "configuring", 00:11:22.239 "raid_level": "raid1", 00:11:22.239 "superblock": false, 00:11:22.239 
"num_base_bdevs": 4, 00:11:22.239 "num_base_bdevs_discovered": 2, 00:11:22.239 "num_base_bdevs_operational": 4, 00:11:22.239 "base_bdevs_list": [ 00:11:22.239 { 00:11:22.239 "name": null, 00:11:22.239 "uuid": "6db44223-5f59-49b3-a14e-806f49b9a46f", 00:11:22.239 "is_configured": false, 00:11:22.239 "data_offset": 0, 00:11:22.239 "data_size": 65536 00:11:22.239 }, 00:11:22.239 { 00:11:22.239 "name": null, 00:11:22.239 "uuid": "7de5ade6-45f1-4874-8d97-0c684eb51861", 00:11:22.239 "is_configured": false, 00:11:22.239 "data_offset": 0, 00:11:22.239 "data_size": 65536 00:11:22.239 }, 00:11:22.239 { 00:11:22.239 "name": "BaseBdev3", 00:11:22.239 "uuid": "cbc1eb39-1931-4cec-a57c-8cf5654d7bf0", 00:11:22.239 "is_configured": true, 00:11:22.239 "data_offset": 0, 00:11:22.239 "data_size": 65536 00:11:22.239 }, 00:11:22.239 { 00:11:22.239 "name": "BaseBdev4", 00:11:22.239 "uuid": "c94cc5e9-5951-475f-a208-7a3ba41d9269", 00:11:22.239 "is_configured": true, 00:11:22.239 "data_offset": 0, 00:11:22.239 "data_size": 65536 00:11:22.239 } 00:11:22.239 ] 00:11:22.239 }' 00:11:22.239 13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.239 13:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.804 13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.804 13:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.804 13:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.804 13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:22.804 13:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.804 13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:22.804 13:21:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:22.804 13:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.804 13:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.804 [2024-11-17 13:21:11.824364] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:22.804 13:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.804 13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:22.804 13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.804 13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:22.804 13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:22.804 13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:22.804 13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:22.804 13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.804 13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.804 13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.804 13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.804 13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.804 13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.804 13:21:11 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.804 13:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.804 13:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.804 13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.804 "name": "Existed_Raid", 00:11:22.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.804 "strip_size_kb": 0, 00:11:22.804 "state": "configuring", 00:11:22.805 "raid_level": "raid1", 00:11:22.805 "superblock": false, 00:11:22.805 "num_base_bdevs": 4, 00:11:22.805 "num_base_bdevs_discovered": 3, 00:11:22.805 "num_base_bdevs_operational": 4, 00:11:22.805 "base_bdevs_list": [ 00:11:22.805 { 00:11:22.805 "name": null, 00:11:22.805 "uuid": "6db44223-5f59-49b3-a14e-806f49b9a46f", 00:11:22.805 "is_configured": false, 00:11:22.805 "data_offset": 0, 00:11:22.805 "data_size": 65536 00:11:22.805 }, 00:11:22.805 { 00:11:22.805 "name": "BaseBdev2", 00:11:22.805 "uuid": "7de5ade6-45f1-4874-8d97-0c684eb51861", 00:11:22.805 "is_configured": true, 00:11:22.805 "data_offset": 0, 00:11:22.805 "data_size": 65536 00:11:22.805 }, 00:11:22.805 { 00:11:22.805 "name": "BaseBdev3", 00:11:22.805 "uuid": "cbc1eb39-1931-4cec-a57c-8cf5654d7bf0", 00:11:22.805 "is_configured": true, 00:11:22.805 "data_offset": 0, 00:11:22.805 "data_size": 65536 00:11:22.805 }, 00:11:22.805 { 00:11:22.805 "name": "BaseBdev4", 00:11:22.805 "uuid": "c94cc5e9-5951-475f-a208-7a3ba41d9269", 00:11:22.805 "is_configured": true, 00:11:22.805 "data_offset": 0, 00:11:22.805 "data_size": 65536 00:11:22.805 } 00:11:22.805 ] 00:11:22.805 }' 00:11:22.805 13:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.805 13:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.063 13:21:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.063 13:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.063 13:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.063 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:23.063 13:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.063 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:23.063 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.063 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:23.063 13:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.063 13:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.063 13:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.321 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 6db44223-5f59-49b3-a14e-806f49b9a46f 00:11:23.321 13:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.321 13:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.321 [2024-11-17 13:21:12.349526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:23.321 [2024-11-17 13:21:12.349630] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:23.321 [2024-11-17 13:21:12.349658] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:23.321 [2024-11-17 13:21:12.349990] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:23.321 [2024-11-17 13:21:12.350225] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:23.321 [2024-11-17 13:21:12.350270] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:23.321 [2024-11-17 13:21:12.350579] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:23.321 NewBaseBdev 00:11:23.321 13:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.321 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:23.321 13:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:23.321 13:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:23.321 13:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:23.321 13:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:23.321 13:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:23.321 13:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:23.321 13:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.321 13:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.321 13:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.321 13:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:23.321 13:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.321 13:21:12 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.321 [ 00:11:23.321 { 00:11:23.321 "name": "NewBaseBdev", 00:11:23.321 "aliases": [ 00:11:23.321 "6db44223-5f59-49b3-a14e-806f49b9a46f" 00:11:23.321 ], 00:11:23.321 "product_name": "Malloc disk", 00:11:23.321 "block_size": 512, 00:11:23.321 "num_blocks": 65536, 00:11:23.321 "uuid": "6db44223-5f59-49b3-a14e-806f49b9a46f", 00:11:23.321 "assigned_rate_limits": { 00:11:23.321 "rw_ios_per_sec": 0, 00:11:23.321 "rw_mbytes_per_sec": 0, 00:11:23.321 "r_mbytes_per_sec": 0, 00:11:23.321 "w_mbytes_per_sec": 0 00:11:23.321 }, 00:11:23.321 "claimed": true, 00:11:23.321 "claim_type": "exclusive_write", 00:11:23.321 "zoned": false, 00:11:23.321 "supported_io_types": { 00:11:23.321 "read": true, 00:11:23.321 "write": true, 00:11:23.321 "unmap": true, 00:11:23.321 "flush": true, 00:11:23.321 "reset": true, 00:11:23.321 "nvme_admin": false, 00:11:23.321 "nvme_io": false, 00:11:23.321 "nvme_io_md": false, 00:11:23.321 "write_zeroes": true, 00:11:23.321 "zcopy": true, 00:11:23.321 "get_zone_info": false, 00:11:23.321 "zone_management": false, 00:11:23.321 "zone_append": false, 00:11:23.321 "compare": false, 00:11:23.321 "compare_and_write": false, 00:11:23.321 "abort": true, 00:11:23.321 "seek_hole": false, 00:11:23.321 "seek_data": false, 00:11:23.321 "copy": true, 00:11:23.321 "nvme_iov_md": false 00:11:23.321 }, 00:11:23.321 "memory_domains": [ 00:11:23.321 { 00:11:23.321 "dma_device_id": "system", 00:11:23.321 "dma_device_type": 1 00:11:23.321 }, 00:11:23.321 { 00:11:23.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.321 "dma_device_type": 2 00:11:23.321 } 00:11:23.321 ], 00:11:23.321 "driver_specific": {} 00:11:23.321 } 00:11:23.321 ] 00:11:23.321 13:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.321 13:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:23.321 13:21:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:23.321 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:23.321 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:23.321 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:23.321 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:23.321 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:23.321 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.321 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.321 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.321 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.321 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.321 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.321 13:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.321 13:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.321 13:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.321 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.321 "name": "Existed_Raid", 00:11:23.321 "uuid": "61c96a91-c720-439a-8354-f7c6cc232dd9", 00:11:23.321 "strip_size_kb": 0, 00:11:23.322 "state": "online", 00:11:23.322 "raid_level": "raid1", 
00:11:23.322 "superblock": false, 00:11:23.322 "num_base_bdevs": 4, 00:11:23.322 "num_base_bdevs_discovered": 4, 00:11:23.322 "num_base_bdevs_operational": 4, 00:11:23.322 "base_bdevs_list": [ 00:11:23.322 { 00:11:23.322 "name": "NewBaseBdev", 00:11:23.322 "uuid": "6db44223-5f59-49b3-a14e-806f49b9a46f", 00:11:23.322 "is_configured": true, 00:11:23.322 "data_offset": 0, 00:11:23.322 "data_size": 65536 00:11:23.322 }, 00:11:23.322 { 00:11:23.322 "name": "BaseBdev2", 00:11:23.322 "uuid": "7de5ade6-45f1-4874-8d97-0c684eb51861", 00:11:23.322 "is_configured": true, 00:11:23.322 "data_offset": 0, 00:11:23.322 "data_size": 65536 00:11:23.322 }, 00:11:23.322 { 00:11:23.322 "name": "BaseBdev3", 00:11:23.322 "uuid": "cbc1eb39-1931-4cec-a57c-8cf5654d7bf0", 00:11:23.322 "is_configured": true, 00:11:23.322 "data_offset": 0, 00:11:23.322 "data_size": 65536 00:11:23.322 }, 00:11:23.322 { 00:11:23.322 "name": "BaseBdev4", 00:11:23.322 "uuid": "c94cc5e9-5951-475f-a208-7a3ba41d9269", 00:11:23.322 "is_configured": true, 00:11:23.322 "data_offset": 0, 00:11:23.322 "data_size": 65536 00:11:23.322 } 00:11:23.322 ] 00:11:23.322 }' 00:11:23.322 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.322 13:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.888 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:23.888 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:23.888 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:23.888 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:23.888 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:23.888 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev 
cmp_base_bdev 00:11:23.888 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:23.888 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:23.888 13:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.888 13:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.888 [2024-11-17 13:21:12.849691] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:23.889 13:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.889 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:23.889 "name": "Existed_Raid", 00:11:23.889 "aliases": [ 00:11:23.889 "61c96a91-c720-439a-8354-f7c6cc232dd9" 00:11:23.889 ], 00:11:23.889 "product_name": "Raid Volume", 00:11:23.889 "block_size": 512, 00:11:23.889 "num_blocks": 65536, 00:11:23.889 "uuid": "61c96a91-c720-439a-8354-f7c6cc232dd9", 00:11:23.889 "assigned_rate_limits": { 00:11:23.889 "rw_ios_per_sec": 0, 00:11:23.889 "rw_mbytes_per_sec": 0, 00:11:23.889 "r_mbytes_per_sec": 0, 00:11:23.889 "w_mbytes_per_sec": 0 00:11:23.889 }, 00:11:23.889 "claimed": false, 00:11:23.889 "zoned": false, 00:11:23.889 "supported_io_types": { 00:11:23.889 "read": true, 00:11:23.889 "write": true, 00:11:23.889 "unmap": false, 00:11:23.889 "flush": false, 00:11:23.889 "reset": true, 00:11:23.889 "nvme_admin": false, 00:11:23.889 "nvme_io": false, 00:11:23.889 "nvme_io_md": false, 00:11:23.889 "write_zeroes": true, 00:11:23.889 "zcopy": false, 00:11:23.889 "get_zone_info": false, 00:11:23.889 "zone_management": false, 00:11:23.889 "zone_append": false, 00:11:23.889 "compare": false, 00:11:23.889 "compare_and_write": false, 00:11:23.889 "abort": false, 00:11:23.889 "seek_hole": false, 00:11:23.889 "seek_data": false, 00:11:23.889 "copy": false, 00:11:23.889 
"nvme_iov_md": false 00:11:23.889 }, 00:11:23.889 "memory_domains": [ 00:11:23.889 { 00:11:23.889 "dma_device_id": "system", 00:11:23.889 "dma_device_type": 1 00:11:23.889 }, 00:11:23.889 { 00:11:23.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.889 "dma_device_type": 2 00:11:23.889 }, 00:11:23.889 { 00:11:23.889 "dma_device_id": "system", 00:11:23.889 "dma_device_type": 1 00:11:23.889 }, 00:11:23.889 { 00:11:23.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.889 "dma_device_type": 2 00:11:23.889 }, 00:11:23.889 { 00:11:23.889 "dma_device_id": "system", 00:11:23.889 "dma_device_type": 1 00:11:23.889 }, 00:11:23.889 { 00:11:23.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.889 "dma_device_type": 2 00:11:23.889 }, 00:11:23.889 { 00:11:23.889 "dma_device_id": "system", 00:11:23.889 "dma_device_type": 1 00:11:23.889 }, 00:11:23.889 { 00:11:23.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.889 "dma_device_type": 2 00:11:23.889 } 00:11:23.889 ], 00:11:23.889 "driver_specific": { 00:11:23.889 "raid": { 00:11:23.889 "uuid": "61c96a91-c720-439a-8354-f7c6cc232dd9", 00:11:23.889 "strip_size_kb": 0, 00:11:23.889 "state": "online", 00:11:23.889 "raid_level": "raid1", 00:11:23.889 "superblock": false, 00:11:23.889 "num_base_bdevs": 4, 00:11:23.889 "num_base_bdevs_discovered": 4, 00:11:23.889 "num_base_bdevs_operational": 4, 00:11:23.889 "base_bdevs_list": [ 00:11:23.889 { 00:11:23.889 "name": "NewBaseBdev", 00:11:23.889 "uuid": "6db44223-5f59-49b3-a14e-806f49b9a46f", 00:11:23.889 "is_configured": true, 00:11:23.889 "data_offset": 0, 00:11:23.889 "data_size": 65536 00:11:23.889 }, 00:11:23.889 { 00:11:23.889 "name": "BaseBdev2", 00:11:23.889 "uuid": "7de5ade6-45f1-4874-8d97-0c684eb51861", 00:11:23.889 "is_configured": true, 00:11:23.889 "data_offset": 0, 00:11:23.889 "data_size": 65536 00:11:23.889 }, 00:11:23.889 { 00:11:23.889 "name": "BaseBdev3", 00:11:23.889 "uuid": "cbc1eb39-1931-4cec-a57c-8cf5654d7bf0", 00:11:23.889 "is_configured": true, 
00:11:23.889 "data_offset": 0, 00:11:23.889 "data_size": 65536 00:11:23.889 }, 00:11:23.889 { 00:11:23.889 "name": "BaseBdev4", 00:11:23.889 "uuid": "c94cc5e9-5951-475f-a208-7a3ba41d9269", 00:11:23.889 "is_configured": true, 00:11:23.889 "data_offset": 0, 00:11:23.889 "data_size": 65536 00:11:23.889 } 00:11:23.889 ] 00:11:23.889 } 00:11:23.889 } 00:11:23.889 }' 00:11:23.889 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:23.889 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:23.889 BaseBdev2 00:11:23.889 BaseBdev3 00:11:23.889 BaseBdev4' 00:11:23.889 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:23.889 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:23.889 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:23.889 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:23.889 13:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:23.889 13:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.889 13:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.889 13:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.889 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:23.889 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:23.889 13:21:13 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:23.889 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:23.889 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:23.889 13:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.889 13:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.889 13:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.889 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:23.889 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:23.889 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:23.889 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:23.889 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:23.889 13:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.889 13:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.889 13:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.148 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:24.148 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:24.148 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:24.148 13:21:13 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:24.148 13:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.148 13:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.148 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.148 13:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.148 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:24.148 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:24.148 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:24.148 13:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.148 13:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.148 [2024-11-17 13:21:13.177345] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:24.148 [2024-11-17 13:21:13.177417] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:24.148 [2024-11-17 13:21:13.177501] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:24.148 [2024-11-17 13:21:13.177824] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:24.148 [2024-11-17 13:21:13.177841] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:24.148 13:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.148 13:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 73089 
00:11:24.148 13:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 73089 ']' 00:11:24.148 13:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 73089 00:11:24.148 13:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:24.148 13:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:24.148 13:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73089 00:11:24.148 killing process with pid 73089 00:11:24.148 13:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:24.148 13:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:24.148 13:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73089' 00:11:24.148 13:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 73089 00:11:24.148 [2024-11-17 13:21:13.224571] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:24.148 13:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 73089 00:11:24.715 [2024-11-17 13:21:13.642536] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:25.671 13:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:25.671 00:11:25.671 real 0m11.366s 00:11:25.671 user 0m17.903s 00:11:25.671 sys 0m2.036s 00:11:25.671 13:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:25.671 13:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.671 ************************************ 00:11:25.671 END TEST raid_state_function_test 00:11:25.671 ************************************ 00:11:25.671 13:21:14 bdev_raid -- 
bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:11:25.671 13:21:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:25.671 13:21:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:25.671 13:21:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:25.671 ************************************ 00:11:25.671 START TEST raid_state_function_test_sb 00:11:25.671 ************************************ 00:11:25.671 13:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:11:25.671 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:25.671 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:25.671 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:25.671 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:25.671 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:25.671 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:25.671 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:25.671 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:25.671 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:25.671 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:25.671 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:25.671 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:25.671 13:21:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:25.671 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:25.671 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:25.671 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:25.671 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:25.671 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:25.671 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:25.671 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:25.671 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:25.671 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:25.671 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:25.671 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:25.671 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:25.671 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:25.671 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:25.671 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:25.671 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73755 00:11:25.671 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:25.671 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73755' 00:11:25.671 Process raid pid: 73755 00:11:25.671 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73755 00:11:25.671 13:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 73755 ']' 00:11:25.671 13:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:25.671 13:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:25.671 13:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:25.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:25.671 13:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:25.671 13:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.930 [2024-11-17 13:21:14.967823] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:11:25.930 [2024-11-17 13:21:14.968005] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:25.930 [2024-11-17 13:21:15.124754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:26.188 [2024-11-17 13:21:15.242350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.446 [2024-11-17 13:21:15.464782] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:26.446 [2024-11-17 13:21:15.464818] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:26.704 13:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:26.704 13:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:26.704 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:26.704 13:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.704 13:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.704 [2024-11-17 13:21:15.816664] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:26.704 [2024-11-17 13:21:15.816773] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:26.704 [2024-11-17 13:21:15.816788] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:26.704 [2024-11-17 13:21:15.816810] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:26.704 [2024-11-17 13:21:15.816835] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:11:26.704 [2024-11-17 13:21:15.816845] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:26.704 [2024-11-17 13:21:15.816852] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:26.704 [2024-11-17 13:21:15.816861] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:26.704 13:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.704 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:26.704 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.704 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.704 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:26.704 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:26.704 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:26.704 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.704 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.704 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.704 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.704 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.705 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.705 13:21:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.705 13:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.705 13:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.705 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.705 "name": "Existed_Raid", 00:11:26.705 "uuid": "b592fcee-e426-42e2-8e4c-fb32f40e77ef", 00:11:26.705 "strip_size_kb": 0, 00:11:26.705 "state": "configuring", 00:11:26.705 "raid_level": "raid1", 00:11:26.705 "superblock": true, 00:11:26.705 "num_base_bdevs": 4, 00:11:26.705 "num_base_bdevs_discovered": 0, 00:11:26.705 "num_base_bdevs_operational": 4, 00:11:26.705 "base_bdevs_list": [ 00:11:26.705 { 00:11:26.705 "name": "BaseBdev1", 00:11:26.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.705 "is_configured": false, 00:11:26.705 "data_offset": 0, 00:11:26.705 "data_size": 0 00:11:26.705 }, 00:11:26.705 { 00:11:26.705 "name": "BaseBdev2", 00:11:26.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.705 "is_configured": false, 00:11:26.705 "data_offset": 0, 00:11:26.705 "data_size": 0 00:11:26.705 }, 00:11:26.705 { 00:11:26.705 "name": "BaseBdev3", 00:11:26.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.705 "is_configured": false, 00:11:26.705 "data_offset": 0, 00:11:26.705 "data_size": 0 00:11:26.705 }, 00:11:26.705 { 00:11:26.705 "name": "BaseBdev4", 00:11:26.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.705 "is_configured": false, 00:11:26.705 "data_offset": 0, 00:11:26.705 "data_size": 0 00:11:26.705 } 00:11:26.705 ] 00:11:26.705 }' 00:11:26.705 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.705 13:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.272 13:21:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:27.272 13:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.272 13:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.272 [2024-11-17 13:21:16.263858] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:27.272 [2024-11-17 13:21:16.263942] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:27.272 13:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.272 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:27.272 13:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.272 13:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.272 [2024-11-17 13:21:16.275831] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:27.272 [2024-11-17 13:21:16.275909] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:27.272 [2024-11-17 13:21:16.275948] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:27.272 [2024-11-17 13:21:16.276004] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:27.272 [2024-11-17 13:21:16.276041] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:27.272 [2024-11-17 13:21:16.276081] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:27.272 [2024-11-17 13:21:16.276118] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:11:27.272 [2024-11-17 13:21:16.276156] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:27.272 13:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.272 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:27.272 13:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.272 13:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.272 [2024-11-17 13:21:16.323875] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:27.272 BaseBdev1 00:11:27.272 13:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.272 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:27.272 13:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:27.272 13:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:27.272 13:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:27.272 13:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:27.272 13:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:27.272 13:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:27.272 13:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.272 13:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.272 13:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:27.272 13:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:27.272 13:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.272 13:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.272 [ 00:11:27.272 { 00:11:27.272 "name": "BaseBdev1", 00:11:27.272 "aliases": [ 00:11:27.272 "e8ebb8e1-1146-4ce5-90ef-0ffe42b37b7d" 00:11:27.272 ], 00:11:27.272 "product_name": "Malloc disk", 00:11:27.272 "block_size": 512, 00:11:27.272 "num_blocks": 65536, 00:11:27.272 "uuid": "e8ebb8e1-1146-4ce5-90ef-0ffe42b37b7d", 00:11:27.272 "assigned_rate_limits": { 00:11:27.272 "rw_ios_per_sec": 0, 00:11:27.272 "rw_mbytes_per_sec": 0, 00:11:27.272 "r_mbytes_per_sec": 0, 00:11:27.272 "w_mbytes_per_sec": 0 00:11:27.272 }, 00:11:27.272 "claimed": true, 00:11:27.272 "claim_type": "exclusive_write", 00:11:27.272 "zoned": false, 00:11:27.272 "supported_io_types": { 00:11:27.272 "read": true, 00:11:27.272 "write": true, 00:11:27.272 "unmap": true, 00:11:27.272 "flush": true, 00:11:27.272 "reset": true, 00:11:27.272 "nvme_admin": false, 00:11:27.272 "nvme_io": false, 00:11:27.272 "nvme_io_md": false, 00:11:27.272 "write_zeroes": true, 00:11:27.272 "zcopy": true, 00:11:27.272 "get_zone_info": false, 00:11:27.272 "zone_management": false, 00:11:27.272 "zone_append": false, 00:11:27.272 "compare": false, 00:11:27.272 "compare_and_write": false, 00:11:27.272 "abort": true, 00:11:27.272 "seek_hole": false, 00:11:27.272 "seek_data": false, 00:11:27.272 "copy": true, 00:11:27.272 "nvme_iov_md": false 00:11:27.272 }, 00:11:27.272 "memory_domains": [ 00:11:27.272 { 00:11:27.272 "dma_device_id": "system", 00:11:27.272 "dma_device_type": 1 00:11:27.272 }, 00:11:27.272 { 00:11:27.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.272 "dma_device_type": 2 00:11:27.272 } 00:11:27.272 ], 00:11:27.272 "driver_specific": {} 
00:11:27.272 } 00:11:27.272 ] 00:11:27.272 13:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.272 13:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:27.272 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:27.272 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:27.272 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:27.272 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:27.272 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:27.273 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:27.273 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.273 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.273 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.273 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.273 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.273 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.273 13:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.273 13:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.273 13:21:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.273 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.273 "name": "Existed_Raid", 00:11:27.273 "uuid": "2095cda3-2ff0-4419-b0a4-eb290f92b7c2", 00:11:27.273 "strip_size_kb": 0, 00:11:27.273 "state": "configuring", 00:11:27.273 "raid_level": "raid1", 00:11:27.273 "superblock": true, 00:11:27.273 "num_base_bdevs": 4, 00:11:27.273 "num_base_bdevs_discovered": 1, 00:11:27.273 "num_base_bdevs_operational": 4, 00:11:27.273 "base_bdevs_list": [ 00:11:27.273 { 00:11:27.273 "name": "BaseBdev1", 00:11:27.273 "uuid": "e8ebb8e1-1146-4ce5-90ef-0ffe42b37b7d", 00:11:27.273 "is_configured": true, 00:11:27.273 "data_offset": 2048, 00:11:27.273 "data_size": 63488 00:11:27.273 }, 00:11:27.273 { 00:11:27.273 "name": "BaseBdev2", 00:11:27.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.273 "is_configured": false, 00:11:27.273 "data_offset": 0, 00:11:27.273 "data_size": 0 00:11:27.273 }, 00:11:27.273 { 00:11:27.273 "name": "BaseBdev3", 00:11:27.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.273 "is_configured": false, 00:11:27.273 "data_offset": 0, 00:11:27.273 "data_size": 0 00:11:27.273 }, 00:11:27.273 { 00:11:27.273 "name": "BaseBdev4", 00:11:27.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.273 "is_configured": false, 00:11:27.273 "data_offset": 0, 00:11:27.273 "data_size": 0 00:11:27.273 } 00:11:27.273 ] 00:11:27.273 }' 00:11:27.273 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.273 13:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.838 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:27.839 13:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.839 13:21:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:27.839 [2024-11-17 13:21:16.795103] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:27.839 [2024-11-17 13:21:16.795150] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:27.839 13:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.839 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:27.839 13:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.839 13:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.839 [2024-11-17 13:21:16.807141] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:27.839 [2024-11-17 13:21:16.809128] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:27.839 [2024-11-17 13:21:16.809205] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:27.839 [2024-11-17 13:21:16.809256] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:27.839 [2024-11-17 13:21:16.809301] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:27.839 [2024-11-17 13:21:16.809323] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:27.839 [2024-11-17 13:21:16.809373] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:27.839 13:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.839 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:27.839 13:21:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:27.839 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:27.839 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:27.839 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:27.839 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:27.839 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:27.839 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:27.839 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.839 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.839 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.839 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.839 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.839 13:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.839 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.839 13:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.839 13:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.839 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.839 "name": 
"Existed_Raid", 00:11:27.839 "uuid": "349efab0-d06c-4b16-8f8f-cada2d68e858", 00:11:27.839 "strip_size_kb": 0, 00:11:27.839 "state": "configuring", 00:11:27.839 "raid_level": "raid1", 00:11:27.839 "superblock": true, 00:11:27.839 "num_base_bdevs": 4, 00:11:27.839 "num_base_bdevs_discovered": 1, 00:11:27.839 "num_base_bdevs_operational": 4, 00:11:27.839 "base_bdevs_list": [ 00:11:27.839 { 00:11:27.839 "name": "BaseBdev1", 00:11:27.839 "uuid": "e8ebb8e1-1146-4ce5-90ef-0ffe42b37b7d", 00:11:27.839 "is_configured": true, 00:11:27.839 "data_offset": 2048, 00:11:27.839 "data_size": 63488 00:11:27.839 }, 00:11:27.839 { 00:11:27.839 "name": "BaseBdev2", 00:11:27.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.839 "is_configured": false, 00:11:27.839 "data_offset": 0, 00:11:27.839 "data_size": 0 00:11:27.839 }, 00:11:27.839 { 00:11:27.839 "name": "BaseBdev3", 00:11:27.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.839 "is_configured": false, 00:11:27.839 "data_offset": 0, 00:11:27.839 "data_size": 0 00:11:27.839 }, 00:11:27.839 { 00:11:27.839 "name": "BaseBdev4", 00:11:27.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.839 "is_configured": false, 00:11:27.839 "data_offset": 0, 00:11:27.839 "data_size": 0 00:11:27.839 } 00:11:27.839 ] 00:11:27.839 }' 00:11:27.839 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.839 13:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.097 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:28.097 13:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.097 13:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.097 [2024-11-17 13:21:17.314633] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:28.097 
BaseBdev2 00:11:28.097 13:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.097 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:28.097 13:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:28.097 13:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:28.097 13:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:28.097 13:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:28.097 13:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:28.097 13:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:28.097 13:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.097 13:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.355 13:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.355 13:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:28.355 13:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.355 13:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.355 [ 00:11:28.355 { 00:11:28.355 "name": "BaseBdev2", 00:11:28.355 "aliases": [ 00:11:28.355 "5d02b1cd-3e08-46ba-b237-3533803aebd4" 00:11:28.355 ], 00:11:28.355 "product_name": "Malloc disk", 00:11:28.355 "block_size": 512, 00:11:28.355 "num_blocks": 65536, 00:11:28.355 "uuid": "5d02b1cd-3e08-46ba-b237-3533803aebd4", 00:11:28.356 "assigned_rate_limits": { 
00:11:28.356 "rw_ios_per_sec": 0, 00:11:28.356 "rw_mbytes_per_sec": 0, 00:11:28.356 "r_mbytes_per_sec": 0, 00:11:28.356 "w_mbytes_per_sec": 0 00:11:28.356 }, 00:11:28.356 "claimed": true, 00:11:28.356 "claim_type": "exclusive_write", 00:11:28.356 "zoned": false, 00:11:28.356 "supported_io_types": { 00:11:28.356 "read": true, 00:11:28.356 "write": true, 00:11:28.356 "unmap": true, 00:11:28.356 "flush": true, 00:11:28.356 "reset": true, 00:11:28.356 "nvme_admin": false, 00:11:28.356 "nvme_io": false, 00:11:28.356 "nvme_io_md": false, 00:11:28.356 "write_zeroes": true, 00:11:28.356 "zcopy": true, 00:11:28.356 "get_zone_info": false, 00:11:28.356 "zone_management": false, 00:11:28.356 "zone_append": false, 00:11:28.356 "compare": false, 00:11:28.356 "compare_and_write": false, 00:11:28.356 "abort": true, 00:11:28.356 "seek_hole": false, 00:11:28.356 "seek_data": false, 00:11:28.356 "copy": true, 00:11:28.356 "nvme_iov_md": false 00:11:28.356 }, 00:11:28.356 "memory_domains": [ 00:11:28.356 { 00:11:28.356 "dma_device_id": "system", 00:11:28.356 "dma_device_type": 1 00:11:28.356 }, 00:11:28.356 { 00:11:28.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.356 "dma_device_type": 2 00:11:28.356 } 00:11:28.356 ], 00:11:28.356 "driver_specific": {} 00:11:28.356 } 00:11:28.356 ] 00:11:28.356 13:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.356 13:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:28.356 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:28.356 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:28.356 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:28.356 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:11:28.356 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.356 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:28.356 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:28.356 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:28.356 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.356 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.356 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.356 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.356 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.356 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.356 13:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.356 13:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.356 13:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.356 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.356 "name": "Existed_Raid", 00:11:28.356 "uuid": "349efab0-d06c-4b16-8f8f-cada2d68e858", 00:11:28.356 "strip_size_kb": 0, 00:11:28.356 "state": "configuring", 00:11:28.356 "raid_level": "raid1", 00:11:28.356 "superblock": true, 00:11:28.356 "num_base_bdevs": 4, 00:11:28.356 "num_base_bdevs_discovered": 2, 00:11:28.356 "num_base_bdevs_operational": 4, 00:11:28.356 
"base_bdevs_list": [ 00:11:28.356 { 00:11:28.356 "name": "BaseBdev1", 00:11:28.356 "uuid": "e8ebb8e1-1146-4ce5-90ef-0ffe42b37b7d", 00:11:28.356 "is_configured": true, 00:11:28.356 "data_offset": 2048, 00:11:28.356 "data_size": 63488 00:11:28.356 }, 00:11:28.356 { 00:11:28.356 "name": "BaseBdev2", 00:11:28.356 "uuid": "5d02b1cd-3e08-46ba-b237-3533803aebd4", 00:11:28.356 "is_configured": true, 00:11:28.356 "data_offset": 2048, 00:11:28.356 "data_size": 63488 00:11:28.356 }, 00:11:28.356 { 00:11:28.356 "name": "BaseBdev3", 00:11:28.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.356 "is_configured": false, 00:11:28.356 "data_offset": 0, 00:11:28.356 "data_size": 0 00:11:28.356 }, 00:11:28.356 { 00:11:28.356 "name": "BaseBdev4", 00:11:28.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.356 "is_configured": false, 00:11:28.356 "data_offset": 0, 00:11:28.356 "data_size": 0 00:11:28.356 } 00:11:28.356 ] 00:11:28.356 }' 00:11:28.356 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.356 13:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.614 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:28.614 13:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.614 13:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.873 [2024-11-17 13:21:17.873635] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:28.873 BaseBdev3 00:11:28.873 13:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.873 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:28.873 13:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev3 00:11:28.873 13:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:28.873 13:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:28.873 13:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:28.873 13:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:28.873 13:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:28.873 13:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.873 13:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.873 13:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.873 13:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:28.873 13:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.873 13:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.873 [ 00:11:28.873 { 00:11:28.873 "name": "BaseBdev3", 00:11:28.873 "aliases": [ 00:11:28.873 "af27ef9b-f323-4692-a271-d306eafd5ada" 00:11:28.873 ], 00:11:28.873 "product_name": "Malloc disk", 00:11:28.873 "block_size": 512, 00:11:28.873 "num_blocks": 65536, 00:11:28.873 "uuid": "af27ef9b-f323-4692-a271-d306eafd5ada", 00:11:28.873 "assigned_rate_limits": { 00:11:28.873 "rw_ios_per_sec": 0, 00:11:28.873 "rw_mbytes_per_sec": 0, 00:11:28.873 "r_mbytes_per_sec": 0, 00:11:28.873 "w_mbytes_per_sec": 0 00:11:28.873 }, 00:11:28.873 "claimed": true, 00:11:28.873 "claim_type": "exclusive_write", 00:11:28.873 "zoned": false, 00:11:28.873 "supported_io_types": { 00:11:28.873 "read": true, 00:11:28.873 
"write": true, 00:11:28.873 "unmap": true, 00:11:28.873 "flush": true, 00:11:28.873 "reset": true, 00:11:28.873 "nvme_admin": false, 00:11:28.873 "nvme_io": false, 00:11:28.873 "nvme_io_md": false, 00:11:28.873 "write_zeroes": true, 00:11:28.873 "zcopy": true, 00:11:28.873 "get_zone_info": false, 00:11:28.873 "zone_management": false, 00:11:28.873 "zone_append": false, 00:11:28.873 "compare": false, 00:11:28.873 "compare_and_write": false, 00:11:28.873 "abort": true, 00:11:28.873 "seek_hole": false, 00:11:28.873 "seek_data": false, 00:11:28.873 "copy": true, 00:11:28.873 "nvme_iov_md": false 00:11:28.873 }, 00:11:28.873 "memory_domains": [ 00:11:28.873 { 00:11:28.873 "dma_device_id": "system", 00:11:28.873 "dma_device_type": 1 00:11:28.873 }, 00:11:28.873 { 00:11:28.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.873 "dma_device_type": 2 00:11:28.873 } 00:11:28.873 ], 00:11:28.873 "driver_specific": {} 00:11:28.873 } 00:11:28.873 ] 00:11:28.873 13:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.873 13:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:28.873 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:28.873 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:28.873 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:28.873 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.873 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.873 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:28.873 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:11:28.873 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:28.873 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.873 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.873 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.873 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.873 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.873 13:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.873 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.873 13:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.873 13:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.873 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.873 "name": "Existed_Raid", 00:11:28.873 "uuid": "349efab0-d06c-4b16-8f8f-cada2d68e858", 00:11:28.873 "strip_size_kb": 0, 00:11:28.873 "state": "configuring", 00:11:28.873 "raid_level": "raid1", 00:11:28.873 "superblock": true, 00:11:28.873 "num_base_bdevs": 4, 00:11:28.873 "num_base_bdevs_discovered": 3, 00:11:28.873 "num_base_bdevs_operational": 4, 00:11:28.873 "base_bdevs_list": [ 00:11:28.873 { 00:11:28.873 "name": "BaseBdev1", 00:11:28.873 "uuid": "e8ebb8e1-1146-4ce5-90ef-0ffe42b37b7d", 00:11:28.873 "is_configured": true, 00:11:28.873 "data_offset": 2048, 00:11:28.873 "data_size": 63488 00:11:28.873 }, 00:11:28.873 { 00:11:28.873 "name": "BaseBdev2", 00:11:28.873 "uuid": 
"5d02b1cd-3e08-46ba-b237-3533803aebd4", 00:11:28.873 "is_configured": true, 00:11:28.873 "data_offset": 2048, 00:11:28.873 "data_size": 63488 00:11:28.873 }, 00:11:28.873 { 00:11:28.873 "name": "BaseBdev3", 00:11:28.873 "uuid": "af27ef9b-f323-4692-a271-d306eafd5ada", 00:11:28.873 "is_configured": true, 00:11:28.873 "data_offset": 2048, 00:11:28.873 "data_size": 63488 00:11:28.873 }, 00:11:28.873 { 00:11:28.873 "name": "BaseBdev4", 00:11:28.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.873 "is_configured": false, 00:11:28.873 "data_offset": 0, 00:11:28.873 "data_size": 0 00:11:28.873 } 00:11:28.873 ] 00:11:28.873 }' 00:11:28.873 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.873 13:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.132 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:29.132 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.393 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.393 [2024-11-17 13:21:18.397165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:29.393 [2024-11-17 13:21:18.397478] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:29.393 [2024-11-17 13:21:18.397495] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:29.393 [2024-11-17 13:21:18.397786] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:29.393 BaseBdev4 00:11:29.393 [2024-11-17 13:21:18.398024] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:29.393 [2024-11-17 13:21:18.398080] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:11:29.393 [2024-11-17 13:21:18.398306] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:29.393 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.393 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:29.393 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:29.393 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:29.393 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:29.393 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:29.393 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:29.393 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:29.393 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.393 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.393 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.393 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:29.393 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.393 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.393 [ 00:11:29.393 { 00:11:29.393 "name": "BaseBdev4", 00:11:29.393 "aliases": [ 00:11:29.393 "decf0964-c9c0-4c65-ab63-5e685e5f08ad" 00:11:29.393 ], 00:11:29.393 "product_name": "Malloc disk", 00:11:29.393 "block_size": 512, 00:11:29.393 
"num_blocks": 65536, 00:11:29.393 "uuid": "decf0964-c9c0-4c65-ab63-5e685e5f08ad", 00:11:29.393 "assigned_rate_limits": { 00:11:29.393 "rw_ios_per_sec": 0, 00:11:29.393 "rw_mbytes_per_sec": 0, 00:11:29.393 "r_mbytes_per_sec": 0, 00:11:29.393 "w_mbytes_per_sec": 0 00:11:29.393 }, 00:11:29.393 "claimed": true, 00:11:29.393 "claim_type": "exclusive_write", 00:11:29.393 "zoned": false, 00:11:29.393 "supported_io_types": { 00:11:29.393 "read": true, 00:11:29.393 "write": true, 00:11:29.393 "unmap": true, 00:11:29.393 "flush": true, 00:11:29.393 "reset": true, 00:11:29.393 "nvme_admin": false, 00:11:29.393 "nvme_io": false, 00:11:29.393 "nvme_io_md": false, 00:11:29.393 "write_zeroes": true, 00:11:29.393 "zcopy": true, 00:11:29.393 "get_zone_info": false, 00:11:29.393 "zone_management": false, 00:11:29.393 "zone_append": false, 00:11:29.393 "compare": false, 00:11:29.393 "compare_and_write": false, 00:11:29.393 "abort": true, 00:11:29.393 "seek_hole": false, 00:11:29.393 "seek_data": false, 00:11:29.393 "copy": true, 00:11:29.393 "nvme_iov_md": false 00:11:29.393 }, 00:11:29.393 "memory_domains": [ 00:11:29.393 { 00:11:29.393 "dma_device_id": "system", 00:11:29.393 "dma_device_type": 1 00:11:29.393 }, 00:11:29.393 { 00:11:29.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.393 "dma_device_type": 2 00:11:29.393 } 00:11:29.393 ], 00:11:29.393 "driver_specific": {} 00:11:29.393 } 00:11:29.393 ] 00:11:29.393 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.393 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:29.393 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:29.393 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:29.393 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:11:29.393 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:29.393 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:29.393 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:29.393 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:29.393 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:29.393 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.393 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.393 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.393 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.393 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.393 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.393 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.393 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.393 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.393 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.393 "name": "Existed_Raid", 00:11:29.393 "uuid": "349efab0-d06c-4b16-8f8f-cada2d68e858", 00:11:29.393 "strip_size_kb": 0, 00:11:29.393 "state": "online", 00:11:29.393 "raid_level": "raid1", 00:11:29.393 "superblock": true, 00:11:29.393 "num_base_bdevs": 4, 
00:11:29.393 "num_base_bdevs_discovered": 4, 00:11:29.393 "num_base_bdevs_operational": 4, 00:11:29.393 "base_bdevs_list": [ 00:11:29.393 { 00:11:29.393 "name": "BaseBdev1", 00:11:29.393 "uuid": "e8ebb8e1-1146-4ce5-90ef-0ffe42b37b7d", 00:11:29.393 "is_configured": true, 00:11:29.393 "data_offset": 2048, 00:11:29.393 "data_size": 63488 00:11:29.393 }, 00:11:29.393 { 00:11:29.393 "name": "BaseBdev2", 00:11:29.393 "uuid": "5d02b1cd-3e08-46ba-b237-3533803aebd4", 00:11:29.393 "is_configured": true, 00:11:29.393 "data_offset": 2048, 00:11:29.393 "data_size": 63488 00:11:29.393 }, 00:11:29.393 { 00:11:29.394 "name": "BaseBdev3", 00:11:29.394 "uuid": "af27ef9b-f323-4692-a271-d306eafd5ada", 00:11:29.394 "is_configured": true, 00:11:29.394 "data_offset": 2048, 00:11:29.394 "data_size": 63488 00:11:29.394 }, 00:11:29.394 { 00:11:29.394 "name": "BaseBdev4", 00:11:29.394 "uuid": "decf0964-c9c0-4c65-ab63-5e685e5f08ad", 00:11:29.394 "is_configured": true, 00:11:29.394 "data_offset": 2048, 00:11:29.394 "data_size": 63488 00:11:29.394 } 00:11:29.394 ] 00:11:29.394 }' 00:11:29.394 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.394 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.964 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:29.964 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:29.964 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:29.964 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:29.964 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:29.964 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:29.964 
13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:29.964 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.964 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.964 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:29.964 [2024-11-17 13:21:18.952618] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:29.964 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.964 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:29.964 "name": "Existed_Raid", 00:11:29.964 "aliases": [ 00:11:29.964 "349efab0-d06c-4b16-8f8f-cada2d68e858" 00:11:29.964 ], 00:11:29.964 "product_name": "Raid Volume", 00:11:29.964 "block_size": 512, 00:11:29.964 "num_blocks": 63488, 00:11:29.964 "uuid": "349efab0-d06c-4b16-8f8f-cada2d68e858", 00:11:29.964 "assigned_rate_limits": { 00:11:29.964 "rw_ios_per_sec": 0, 00:11:29.964 "rw_mbytes_per_sec": 0, 00:11:29.964 "r_mbytes_per_sec": 0, 00:11:29.964 "w_mbytes_per_sec": 0 00:11:29.964 }, 00:11:29.964 "claimed": false, 00:11:29.964 "zoned": false, 00:11:29.964 "supported_io_types": { 00:11:29.964 "read": true, 00:11:29.964 "write": true, 00:11:29.964 "unmap": false, 00:11:29.964 "flush": false, 00:11:29.964 "reset": true, 00:11:29.964 "nvme_admin": false, 00:11:29.964 "nvme_io": false, 00:11:29.964 "nvme_io_md": false, 00:11:29.964 "write_zeroes": true, 00:11:29.964 "zcopy": false, 00:11:29.964 "get_zone_info": false, 00:11:29.964 "zone_management": false, 00:11:29.964 "zone_append": false, 00:11:29.964 "compare": false, 00:11:29.964 "compare_and_write": false, 00:11:29.964 "abort": false, 00:11:29.964 "seek_hole": false, 00:11:29.964 "seek_data": false, 00:11:29.964 "copy": false, 00:11:29.964 
"nvme_iov_md": false 00:11:29.964 }, 00:11:29.964 "memory_domains": [ 00:11:29.964 { 00:11:29.964 "dma_device_id": "system", 00:11:29.964 "dma_device_type": 1 00:11:29.964 }, 00:11:29.964 { 00:11:29.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.964 "dma_device_type": 2 00:11:29.964 }, 00:11:29.964 { 00:11:29.964 "dma_device_id": "system", 00:11:29.964 "dma_device_type": 1 00:11:29.964 }, 00:11:29.964 { 00:11:29.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.964 "dma_device_type": 2 00:11:29.964 }, 00:11:29.964 { 00:11:29.964 "dma_device_id": "system", 00:11:29.964 "dma_device_type": 1 00:11:29.964 }, 00:11:29.964 { 00:11:29.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.964 "dma_device_type": 2 00:11:29.964 }, 00:11:29.964 { 00:11:29.964 "dma_device_id": "system", 00:11:29.964 "dma_device_type": 1 00:11:29.964 }, 00:11:29.964 { 00:11:29.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.964 "dma_device_type": 2 00:11:29.964 } 00:11:29.964 ], 00:11:29.964 "driver_specific": { 00:11:29.964 "raid": { 00:11:29.964 "uuid": "349efab0-d06c-4b16-8f8f-cada2d68e858", 00:11:29.964 "strip_size_kb": 0, 00:11:29.964 "state": "online", 00:11:29.964 "raid_level": "raid1", 00:11:29.964 "superblock": true, 00:11:29.964 "num_base_bdevs": 4, 00:11:29.964 "num_base_bdevs_discovered": 4, 00:11:29.964 "num_base_bdevs_operational": 4, 00:11:29.964 "base_bdevs_list": [ 00:11:29.964 { 00:11:29.964 "name": "BaseBdev1", 00:11:29.964 "uuid": "e8ebb8e1-1146-4ce5-90ef-0ffe42b37b7d", 00:11:29.964 "is_configured": true, 00:11:29.964 "data_offset": 2048, 00:11:29.964 "data_size": 63488 00:11:29.964 }, 00:11:29.964 { 00:11:29.964 "name": "BaseBdev2", 00:11:29.964 "uuid": "5d02b1cd-3e08-46ba-b237-3533803aebd4", 00:11:29.964 "is_configured": true, 00:11:29.964 "data_offset": 2048, 00:11:29.964 "data_size": 63488 00:11:29.964 }, 00:11:29.964 { 00:11:29.964 "name": "BaseBdev3", 00:11:29.964 "uuid": "af27ef9b-f323-4692-a271-d306eafd5ada", 00:11:29.964 "is_configured": true, 
00:11:29.964 "data_offset": 2048, 00:11:29.964 "data_size": 63488 00:11:29.964 }, 00:11:29.964 { 00:11:29.964 "name": "BaseBdev4", 00:11:29.964 "uuid": "decf0964-c9c0-4c65-ab63-5e685e5f08ad", 00:11:29.964 "is_configured": true, 00:11:29.964 "data_offset": 2048, 00:11:29.964 "data_size": 63488 00:11:29.964 } 00:11:29.964 ] 00:11:29.964 } 00:11:29.964 } 00:11:29.964 }' 00:11:29.964 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:29.964 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:29.964 BaseBdev2 00:11:29.964 BaseBdev3 00:11:29.964 BaseBdev4' 00:11:29.964 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.964 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:29.964 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:29.964 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.964 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:29.964 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.964 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.965 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.965 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:29.965 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:29.965 13:21:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:29.965 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:29.965 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.965 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.965 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.965 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.965 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:29.965 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:29.965 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:29.965 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:29.965 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.965 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.965 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.965 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.965 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:29.965 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:29.965 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:29.965 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:29.965 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.965 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.965 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.224 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.224 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:30.224 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:30.224 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:30.224 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.224 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.224 [2024-11-17 13:21:19.235817] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:30.224 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.224 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:30.224 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:30.224 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:30.224 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:11:30.224 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:30.224 13:21:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:30.224 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:30.224 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:30.224 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:30.224 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:30.224 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:30.224 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.224 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.224 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.224 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.224 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.224 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.224 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.224 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:30.224 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.224 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.224 "name": "Existed_Raid", 00:11:30.224 "uuid": "349efab0-d06c-4b16-8f8f-cada2d68e858", 00:11:30.224 "strip_size_kb": 0, 00:11:30.224 
"state": "online", 00:11:30.224 "raid_level": "raid1", 00:11:30.224 "superblock": true, 00:11:30.224 "num_base_bdevs": 4, 00:11:30.224 "num_base_bdevs_discovered": 3, 00:11:30.224 "num_base_bdevs_operational": 3, 00:11:30.224 "base_bdevs_list": [ 00:11:30.224 { 00:11:30.224 "name": null, 00:11:30.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.224 "is_configured": false, 00:11:30.224 "data_offset": 0, 00:11:30.224 "data_size": 63488 00:11:30.224 }, 00:11:30.224 { 00:11:30.224 "name": "BaseBdev2", 00:11:30.224 "uuid": "5d02b1cd-3e08-46ba-b237-3533803aebd4", 00:11:30.224 "is_configured": true, 00:11:30.224 "data_offset": 2048, 00:11:30.224 "data_size": 63488 00:11:30.224 }, 00:11:30.224 { 00:11:30.224 "name": "BaseBdev3", 00:11:30.224 "uuid": "af27ef9b-f323-4692-a271-d306eafd5ada", 00:11:30.224 "is_configured": true, 00:11:30.224 "data_offset": 2048, 00:11:30.224 "data_size": 63488 00:11:30.224 }, 00:11:30.224 { 00:11:30.224 "name": "BaseBdev4", 00:11:30.224 "uuid": "decf0964-c9c0-4c65-ab63-5e685e5f08ad", 00:11:30.224 "is_configured": true, 00:11:30.224 "data_offset": 2048, 00:11:30.224 "data_size": 63488 00:11:30.224 } 00:11:30.224 ] 00:11:30.224 }' 00:11:30.224 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.224 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.792 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:30.792 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:30.792 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.792 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:30.792 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.792 13:21:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.793 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.793 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:30.793 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:30.793 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:30.793 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.793 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.793 [2024-11-17 13:21:19.806998] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:30.793 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.793 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:30.793 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:30.793 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.793 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:30.793 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.793 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.793 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.793 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:30.793 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:11:30.793 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:30.793 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.793 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.793 [2024-11-17 13:21:19.960751] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:31.052 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.052 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:31.052 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:31.052 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.052 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.052 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:31.052 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.052 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.052 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:31.052 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:31.052 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:31.052 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.052 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.052 [2024-11-17 13:21:20.113258] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:31.052 [2024-11-17 13:21:20.113474] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:31.052 [2024-11-17 13:21:20.206141] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:31.052 [2024-11-17 13:21:20.206233] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:31.052 [2024-11-17 13:21:20.206263] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:31.052 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.052 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:31.052 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:31.052 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:31.052 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.052 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.052 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.052 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.052 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:31.052 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:31.052 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:31.052 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:31.052 13:21:20 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:31.052 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:31.052 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.052 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.313 BaseBdev2 00:11:31.313 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.313 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:31.313 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:31.313 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:31.313 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:31.313 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:31.313 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:31.313 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:31.313 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.313 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.313 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.313 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:31.313 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.313 13:21:20 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:11:31.313 [ 00:11:31.313 { 00:11:31.313 "name": "BaseBdev2", 00:11:31.313 "aliases": [ 00:11:31.313 "af2426ee-b283-4a60-b07a-b0e83abb7e49" 00:11:31.314 ], 00:11:31.314 "product_name": "Malloc disk", 00:11:31.314 "block_size": 512, 00:11:31.314 "num_blocks": 65536, 00:11:31.314 "uuid": "af2426ee-b283-4a60-b07a-b0e83abb7e49", 00:11:31.314 "assigned_rate_limits": { 00:11:31.314 "rw_ios_per_sec": 0, 00:11:31.314 "rw_mbytes_per_sec": 0, 00:11:31.314 "r_mbytes_per_sec": 0, 00:11:31.314 "w_mbytes_per_sec": 0 00:11:31.314 }, 00:11:31.314 "claimed": false, 00:11:31.314 "zoned": false, 00:11:31.314 "supported_io_types": { 00:11:31.314 "read": true, 00:11:31.314 "write": true, 00:11:31.314 "unmap": true, 00:11:31.314 "flush": true, 00:11:31.314 "reset": true, 00:11:31.314 "nvme_admin": false, 00:11:31.314 "nvme_io": false, 00:11:31.314 "nvme_io_md": false, 00:11:31.314 "write_zeroes": true, 00:11:31.314 "zcopy": true, 00:11:31.314 "get_zone_info": false, 00:11:31.314 "zone_management": false, 00:11:31.314 "zone_append": false, 00:11:31.314 "compare": false, 00:11:31.314 "compare_and_write": false, 00:11:31.314 "abort": true, 00:11:31.314 "seek_hole": false, 00:11:31.314 "seek_data": false, 00:11:31.314 "copy": true, 00:11:31.314 "nvme_iov_md": false 00:11:31.314 }, 00:11:31.314 "memory_domains": [ 00:11:31.314 { 00:11:31.314 "dma_device_id": "system", 00:11:31.314 "dma_device_type": 1 00:11:31.314 }, 00:11:31.314 { 00:11:31.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.314 "dma_device_type": 2 00:11:31.314 } 00:11:31.314 ], 00:11:31.314 "driver_specific": {} 00:11:31.314 } 00:11:31.314 ] 00:11:31.314 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.314 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:31.314 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:31.314 13:21:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:31.314 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:31.314 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.314 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.314 BaseBdev3 00:11:31.314 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.314 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:31.314 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:31.314 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:31.314 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:31.314 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:31.314 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:31.314 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:31.314 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.314 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.314 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.314 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:31.314 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.314 13:21:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.314 [ 00:11:31.314 { 00:11:31.314 "name": "BaseBdev3", 00:11:31.314 "aliases": [ 00:11:31.314 "a6bb11e7-3faa-461e-af36-8216d45610f7" 00:11:31.314 ], 00:11:31.314 "product_name": "Malloc disk", 00:11:31.314 "block_size": 512, 00:11:31.314 "num_blocks": 65536, 00:11:31.314 "uuid": "a6bb11e7-3faa-461e-af36-8216d45610f7", 00:11:31.314 "assigned_rate_limits": { 00:11:31.314 "rw_ios_per_sec": 0, 00:11:31.314 "rw_mbytes_per_sec": 0, 00:11:31.314 "r_mbytes_per_sec": 0, 00:11:31.314 "w_mbytes_per_sec": 0 00:11:31.314 }, 00:11:31.314 "claimed": false, 00:11:31.314 "zoned": false, 00:11:31.314 "supported_io_types": { 00:11:31.314 "read": true, 00:11:31.314 "write": true, 00:11:31.314 "unmap": true, 00:11:31.314 "flush": true, 00:11:31.314 "reset": true, 00:11:31.314 "nvme_admin": false, 00:11:31.314 "nvme_io": false, 00:11:31.314 "nvme_io_md": false, 00:11:31.314 "write_zeroes": true, 00:11:31.314 "zcopy": true, 00:11:31.314 "get_zone_info": false, 00:11:31.314 "zone_management": false, 00:11:31.314 "zone_append": false, 00:11:31.314 "compare": false, 00:11:31.314 "compare_and_write": false, 00:11:31.314 "abort": true, 00:11:31.314 "seek_hole": false, 00:11:31.314 "seek_data": false, 00:11:31.314 "copy": true, 00:11:31.314 "nvme_iov_md": false 00:11:31.314 }, 00:11:31.314 "memory_domains": [ 00:11:31.314 { 00:11:31.314 "dma_device_id": "system", 00:11:31.314 "dma_device_type": 1 00:11:31.314 }, 00:11:31.314 { 00:11:31.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.314 "dma_device_type": 2 00:11:31.314 } 00:11:31.314 ], 00:11:31.314 "driver_specific": {} 00:11:31.314 } 00:11:31.314 ] 00:11:31.314 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.314 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:31.314 13:21:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:31.314 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:31.314 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:31.314 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.314 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.314 BaseBdev4 00:11:31.314 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.314 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:31.314 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:31.314 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:31.314 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:31.314 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:31.314 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:31.314 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:31.314 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.314 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.314 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.314 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:31.314 13:21:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.314 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.314 [ 00:11:31.314 { 00:11:31.314 "name": "BaseBdev4", 00:11:31.314 "aliases": [ 00:11:31.314 "c8de8e3f-ae1c-4a07-8e45-4e340d5035ed" 00:11:31.314 ], 00:11:31.314 "product_name": "Malloc disk", 00:11:31.314 "block_size": 512, 00:11:31.314 "num_blocks": 65536, 00:11:31.314 "uuid": "c8de8e3f-ae1c-4a07-8e45-4e340d5035ed", 00:11:31.314 "assigned_rate_limits": { 00:11:31.314 "rw_ios_per_sec": 0, 00:11:31.314 "rw_mbytes_per_sec": 0, 00:11:31.314 "r_mbytes_per_sec": 0, 00:11:31.314 "w_mbytes_per_sec": 0 00:11:31.314 }, 00:11:31.314 "claimed": false, 00:11:31.314 "zoned": false, 00:11:31.314 "supported_io_types": { 00:11:31.314 "read": true, 00:11:31.314 "write": true, 00:11:31.314 "unmap": true, 00:11:31.314 "flush": true, 00:11:31.314 "reset": true, 00:11:31.314 "nvme_admin": false, 00:11:31.314 "nvme_io": false, 00:11:31.314 "nvme_io_md": false, 00:11:31.314 "write_zeroes": true, 00:11:31.314 "zcopy": true, 00:11:31.314 "get_zone_info": false, 00:11:31.314 "zone_management": false, 00:11:31.314 "zone_append": false, 00:11:31.314 "compare": false, 00:11:31.314 "compare_and_write": false, 00:11:31.314 "abort": true, 00:11:31.314 "seek_hole": false, 00:11:31.314 "seek_data": false, 00:11:31.314 "copy": true, 00:11:31.314 "nvme_iov_md": false 00:11:31.314 }, 00:11:31.314 "memory_domains": [ 00:11:31.314 { 00:11:31.314 "dma_device_id": "system", 00:11:31.314 "dma_device_type": 1 00:11:31.314 }, 00:11:31.314 { 00:11:31.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.314 "dma_device_type": 2 00:11:31.314 } 00:11:31.314 ], 00:11:31.314 "driver_specific": {} 00:11:31.314 } 00:11:31.314 ] 00:11:31.314 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.314 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:11:31.314 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:31.314 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:31.314 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:31.315 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.315 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.315 [2024-11-17 13:21:20.514295] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:31.315 [2024-11-17 13:21:20.514396] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:31.315 [2024-11-17 13:21:20.514474] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:31.315 [2024-11-17 13:21:20.516398] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:31.315 [2024-11-17 13:21:20.516495] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:31.315 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.315 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:31.315 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:31.315 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:31.315 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:31.315 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:31.315 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:31.315 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.315 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.315 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.315 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.315 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.315 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.315 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.315 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.575 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.575 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.575 "name": "Existed_Raid", 00:11:31.575 "uuid": "415bdcb4-5179-4881-a589-df5683c9a9ee", 00:11:31.575 "strip_size_kb": 0, 00:11:31.575 "state": "configuring", 00:11:31.575 "raid_level": "raid1", 00:11:31.575 "superblock": true, 00:11:31.575 "num_base_bdevs": 4, 00:11:31.575 "num_base_bdevs_discovered": 3, 00:11:31.575 "num_base_bdevs_operational": 4, 00:11:31.575 "base_bdevs_list": [ 00:11:31.575 { 00:11:31.575 "name": "BaseBdev1", 00:11:31.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.575 "is_configured": false, 00:11:31.575 "data_offset": 0, 00:11:31.575 "data_size": 0 00:11:31.575 }, 00:11:31.575 { 00:11:31.575 "name": "BaseBdev2", 00:11:31.575 "uuid": "af2426ee-b283-4a60-b07a-b0e83abb7e49", 
00:11:31.575 "is_configured": true, 00:11:31.575 "data_offset": 2048, 00:11:31.575 "data_size": 63488 00:11:31.575 }, 00:11:31.575 { 00:11:31.575 "name": "BaseBdev3", 00:11:31.575 "uuid": "a6bb11e7-3faa-461e-af36-8216d45610f7", 00:11:31.575 "is_configured": true, 00:11:31.575 "data_offset": 2048, 00:11:31.575 "data_size": 63488 00:11:31.575 }, 00:11:31.575 { 00:11:31.575 "name": "BaseBdev4", 00:11:31.575 "uuid": "c8de8e3f-ae1c-4a07-8e45-4e340d5035ed", 00:11:31.575 "is_configured": true, 00:11:31.575 "data_offset": 2048, 00:11:31.575 "data_size": 63488 00:11:31.575 } 00:11:31.575 ] 00:11:31.575 }' 00:11:31.576 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.576 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.836 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:31.836 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.836 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.836 [2024-11-17 13:21:20.949546] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:31.836 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.836 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:31.836 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:31.836 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:31.836 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:31.836 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:31.836 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:31.836 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.836 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.836 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.836 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.836 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.836 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.836 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.836 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.836 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.836 13:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.836 "name": "Existed_Raid", 00:11:31.836 "uuid": "415bdcb4-5179-4881-a589-df5683c9a9ee", 00:11:31.836 "strip_size_kb": 0, 00:11:31.836 "state": "configuring", 00:11:31.836 "raid_level": "raid1", 00:11:31.836 "superblock": true, 00:11:31.836 "num_base_bdevs": 4, 00:11:31.836 "num_base_bdevs_discovered": 2, 00:11:31.836 "num_base_bdevs_operational": 4, 00:11:31.836 "base_bdevs_list": [ 00:11:31.836 { 00:11:31.836 "name": "BaseBdev1", 00:11:31.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.836 "is_configured": false, 00:11:31.836 "data_offset": 0, 00:11:31.836 "data_size": 0 00:11:31.836 }, 00:11:31.836 { 00:11:31.836 "name": null, 00:11:31.836 "uuid": "af2426ee-b283-4a60-b07a-b0e83abb7e49", 00:11:31.836 
"is_configured": false, 00:11:31.836 "data_offset": 0, 00:11:31.836 "data_size": 63488 00:11:31.836 }, 00:11:31.836 { 00:11:31.836 "name": "BaseBdev3", 00:11:31.836 "uuid": "a6bb11e7-3faa-461e-af36-8216d45610f7", 00:11:31.836 "is_configured": true, 00:11:31.836 "data_offset": 2048, 00:11:31.836 "data_size": 63488 00:11:31.836 }, 00:11:31.836 { 00:11:31.836 "name": "BaseBdev4", 00:11:31.836 "uuid": "c8de8e3f-ae1c-4a07-8e45-4e340d5035ed", 00:11:31.836 "is_configured": true, 00:11:31.836 "data_offset": 2048, 00:11:31.836 "data_size": 63488 00:11:31.836 } 00:11:31.836 ] 00:11:31.836 }' 00:11:31.836 13:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.836 13:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.405 13:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:32.405 13:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.405 13:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.405 13:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.405 13:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.405 13:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:32.405 13:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:32.405 13:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.405 13:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.405 [2024-11-17 13:21:21.490389] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:32.405 BaseBdev1 
00:11:32.405 13:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.406 13:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:32.406 13:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:32.406 13:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:32.406 13:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:32.406 13:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:32.406 13:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:32.406 13:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:32.406 13:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.406 13:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.406 13:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.406 13:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:32.406 13:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.406 13:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.406 [ 00:11:32.406 { 00:11:32.406 "name": "BaseBdev1", 00:11:32.406 "aliases": [ 00:11:32.406 "64c323dc-81df-4d09-b8b6-cda668b4ba1e" 00:11:32.406 ], 00:11:32.406 "product_name": "Malloc disk", 00:11:32.406 "block_size": 512, 00:11:32.406 "num_blocks": 65536, 00:11:32.406 "uuid": "64c323dc-81df-4d09-b8b6-cda668b4ba1e", 00:11:32.406 "assigned_rate_limits": { 00:11:32.406 
"rw_ios_per_sec": 0, 00:11:32.406 "rw_mbytes_per_sec": 0, 00:11:32.406 "r_mbytes_per_sec": 0, 00:11:32.406 "w_mbytes_per_sec": 0 00:11:32.406 }, 00:11:32.406 "claimed": true, 00:11:32.406 "claim_type": "exclusive_write", 00:11:32.406 "zoned": false, 00:11:32.406 "supported_io_types": { 00:11:32.406 "read": true, 00:11:32.406 "write": true, 00:11:32.406 "unmap": true, 00:11:32.406 "flush": true, 00:11:32.406 "reset": true, 00:11:32.406 "nvme_admin": false, 00:11:32.406 "nvme_io": false, 00:11:32.406 "nvme_io_md": false, 00:11:32.406 "write_zeroes": true, 00:11:32.406 "zcopy": true, 00:11:32.406 "get_zone_info": false, 00:11:32.406 "zone_management": false, 00:11:32.406 "zone_append": false, 00:11:32.406 "compare": false, 00:11:32.406 "compare_and_write": false, 00:11:32.406 "abort": true, 00:11:32.406 "seek_hole": false, 00:11:32.406 "seek_data": false, 00:11:32.406 "copy": true, 00:11:32.406 "nvme_iov_md": false 00:11:32.406 }, 00:11:32.406 "memory_domains": [ 00:11:32.406 { 00:11:32.406 "dma_device_id": "system", 00:11:32.406 "dma_device_type": 1 00:11:32.406 }, 00:11:32.406 { 00:11:32.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.406 "dma_device_type": 2 00:11:32.406 } 00:11:32.406 ], 00:11:32.406 "driver_specific": {} 00:11:32.406 } 00:11:32.406 ] 00:11:32.406 13:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.406 13:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:32.406 13:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:32.406 13:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:32.406 13:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:32.406 13:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:11:32.406 13:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:32.406 13:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:32.406 13:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.406 13:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.406 13:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.406 13:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.406 13:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.406 13:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.406 13:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:32.406 13:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.406 13:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.406 13:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.406 "name": "Existed_Raid", 00:11:32.406 "uuid": "415bdcb4-5179-4881-a589-df5683c9a9ee", 00:11:32.406 "strip_size_kb": 0, 00:11:32.406 "state": "configuring", 00:11:32.406 "raid_level": "raid1", 00:11:32.406 "superblock": true, 00:11:32.406 "num_base_bdevs": 4, 00:11:32.406 "num_base_bdevs_discovered": 3, 00:11:32.406 "num_base_bdevs_operational": 4, 00:11:32.406 "base_bdevs_list": [ 00:11:32.406 { 00:11:32.406 "name": "BaseBdev1", 00:11:32.406 "uuid": "64c323dc-81df-4d09-b8b6-cda668b4ba1e", 00:11:32.406 "is_configured": true, 00:11:32.406 "data_offset": 2048, 00:11:32.406 "data_size": 63488 
00:11:32.406 }, 00:11:32.406 { 00:11:32.406 "name": null, 00:11:32.406 "uuid": "af2426ee-b283-4a60-b07a-b0e83abb7e49", 00:11:32.406 "is_configured": false, 00:11:32.406 "data_offset": 0, 00:11:32.406 "data_size": 63488 00:11:32.406 }, 00:11:32.406 { 00:11:32.406 "name": "BaseBdev3", 00:11:32.406 "uuid": "a6bb11e7-3faa-461e-af36-8216d45610f7", 00:11:32.406 "is_configured": true, 00:11:32.406 "data_offset": 2048, 00:11:32.406 "data_size": 63488 00:11:32.406 }, 00:11:32.406 { 00:11:32.406 "name": "BaseBdev4", 00:11:32.406 "uuid": "c8de8e3f-ae1c-4a07-8e45-4e340d5035ed", 00:11:32.406 "is_configured": true, 00:11:32.406 "data_offset": 2048, 00:11:32.406 "data_size": 63488 00:11:32.406 } 00:11:32.406 ] 00:11:32.406 }' 00:11:32.406 13:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.406 13:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.976 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.976 13:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.976 13:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.976 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:32.976 13:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.976 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:32.976 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:32.976 13:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.976 13:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.976 
[2024-11-17 13:21:22.049539] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:32.977 13:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.977 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:32.977 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:32.977 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:32.977 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:32.977 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:32.977 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:32.977 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.977 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.977 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.977 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.977 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.977 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:32.977 13:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.977 13:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.977 13:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.977 13:21:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.977 "name": "Existed_Raid", 00:11:32.977 "uuid": "415bdcb4-5179-4881-a589-df5683c9a9ee", 00:11:32.977 "strip_size_kb": 0, 00:11:32.977 "state": "configuring", 00:11:32.977 "raid_level": "raid1", 00:11:32.977 "superblock": true, 00:11:32.977 "num_base_bdevs": 4, 00:11:32.977 "num_base_bdevs_discovered": 2, 00:11:32.977 "num_base_bdevs_operational": 4, 00:11:32.977 "base_bdevs_list": [ 00:11:32.977 { 00:11:32.977 "name": "BaseBdev1", 00:11:32.977 "uuid": "64c323dc-81df-4d09-b8b6-cda668b4ba1e", 00:11:32.977 "is_configured": true, 00:11:32.977 "data_offset": 2048, 00:11:32.977 "data_size": 63488 00:11:32.977 }, 00:11:32.977 { 00:11:32.977 "name": null, 00:11:32.977 "uuid": "af2426ee-b283-4a60-b07a-b0e83abb7e49", 00:11:32.977 "is_configured": false, 00:11:32.977 "data_offset": 0, 00:11:32.977 "data_size": 63488 00:11:32.977 }, 00:11:32.977 { 00:11:32.977 "name": null, 00:11:32.977 "uuid": "a6bb11e7-3faa-461e-af36-8216d45610f7", 00:11:32.977 "is_configured": false, 00:11:32.977 "data_offset": 0, 00:11:32.977 "data_size": 63488 00:11:32.977 }, 00:11:32.977 { 00:11:32.977 "name": "BaseBdev4", 00:11:32.977 "uuid": "c8de8e3f-ae1c-4a07-8e45-4e340d5035ed", 00:11:32.977 "is_configured": true, 00:11:32.977 "data_offset": 2048, 00:11:32.977 "data_size": 63488 00:11:32.977 } 00:11:32.977 ] 00:11:32.977 }' 00:11:32.977 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.977 13:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.547 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.547 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:33.547 13:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.547 
13:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.547 13:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.547 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:33.547 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:33.547 13:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.547 13:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.547 [2024-11-17 13:21:22.548663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:33.547 13:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.547 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:33.547 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:33.547 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:33.547 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:33.547 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:33.547 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:33.547 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.547 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.547 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:33.547 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.547 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.547 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.547 13:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.547 13:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.547 13:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.547 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.547 "name": "Existed_Raid", 00:11:33.547 "uuid": "415bdcb4-5179-4881-a589-df5683c9a9ee", 00:11:33.547 "strip_size_kb": 0, 00:11:33.547 "state": "configuring", 00:11:33.547 "raid_level": "raid1", 00:11:33.547 "superblock": true, 00:11:33.547 "num_base_bdevs": 4, 00:11:33.547 "num_base_bdevs_discovered": 3, 00:11:33.547 "num_base_bdevs_operational": 4, 00:11:33.547 "base_bdevs_list": [ 00:11:33.547 { 00:11:33.547 "name": "BaseBdev1", 00:11:33.547 "uuid": "64c323dc-81df-4d09-b8b6-cda668b4ba1e", 00:11:33.547 "is_configured": true, 00:11:33.547 "data_offset": 2048, 00:11:33.547 "data_size": 63488 00:11:33.547 }, 00:11:33.547 { 00:11:33.547 "name": null, 00:11:33.547 "uuid": "af2426ee-b283-4a60-b07a-b0e83abb7e49", 00:11:33.547 "is_configured": false, 00:11:33.547 "data_offset": 0, 00:11:33.547 "data_size": 63488 00:11:33.547 }, 00:11:33.547 { 00:11:33.547 "name": "BaseBdev3", 00:11:33.547 "uuid": "a6bb11e7-3faa-461e-af36-8216d45610f7", 00:11:33.547 "is_configured": true, 00:11:33.547 "data_offset": 2048, 00:11:33.547 "data_size": 63488 00:11:33.547 }, 00:11:33.547 { 00:11:33.547 "name": "BaseBdev4", 00:11:33.547 "uuid": 
"c8de8e3f-ae1c-4a07-8e45-4e340d5035ed", 00:11:33.547 "is_configured": true, 00:11:33.547 "data_offset": 2048, 00:11:33.547 "data_size": 63488 00:11:33.547 } 00:11:33.547 ] 00:11:33.547 }' 00:11:33.547 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.547 13:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.811 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.811 13:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.811 13:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.811 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:33.811 13:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.811 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:33.812 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:33.812 13:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.812 13:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.077 [2024-11-17 13:21:23.035874] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:34.077 13:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.077 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:34.077 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:34.077 13:21:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:34.077 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:34.077 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:34.077 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:34.077 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.077 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.077 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.077 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.077 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.077 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.077 13:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.077 13:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.077 13:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.077 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.077 "name": "Existed_Raid", 00:11:34.077 "uuid": "415bdcb4-5179-4881-a589-df5683c9a9ee", 00:11:34.077 "strip_size_kb": 0, 00:11:34.077 "state": "configuring", 00:11:34.077 "raid_level": "raid1", 00:11:34.077 "superblock": true, 00:11:34.077 "num_base_bdevs": 4, 00:11:34.077 "num_base_bdevs_discovered": 2, 00:11:34.077 "num_base_bdevs_operational": 4, 00:11:34.077 "base_bdevs_list": [ 00:11:34.077 { 00:11:34.077 "name": null, 00:11:34.077 
"uuid": "64c323dc-81df-4d09-b8b6-cda668b4ba1e", 00:11:34.077 "is_configured": false, 00:11:34.077 "data_offset": 0, 00:11:34.077 "data_size": 63488 00:11:34.077 }, 00:11:34.077 { 00:11:34.077 "name": null, 00:11:34.077 "uuid": "af2426ee-b283-4a60-b07a-b0e83abb7e49", 00:11:34.077 "is_configured": false, 00:11:34.077 "data_offset": 0, 00:11:34.077 "data_size": 63488 00:11:34.077 }, 00:11:34.077 { 00:11:34.077 "name": "BaseBdev3", 00:11:34.077 "uuid": "a6bb11e7-3faa-461e-af36-8216d45610f7", 00:11:34.077 "is_configured": true, 00:11:34.077 "data_offset": 2048, 00:11:34.077 "data_size": 63488 00:11:34.077 }, 00:11:34.077 { 00:11:34.077 "name": "BaseBdev4", 00:11:34.078 "uuid": "c8de8e3f-ae1c-4a07-8e45-4e340d5035ed", 00:11:34.078 "is_configured": true, 00:11:34.078 "data_offset": 2048, 00:11:34.078 "data_size": 63488 00:11:34.078 } 00:11:34.078 ] 00:11:34.078 }' 00:11:34.078 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.078 13:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.337 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:34.337 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.337 13:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.337 13:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.338 13:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.597 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:34.597 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:34.597 13:21:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.597 13:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.597 [2024-11-17 13:21:23.572050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:34.597 13:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.597 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:34.597 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:34.597 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:34.597 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:34.597 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:34.597 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:34.597 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.597 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.597 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.597 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.597 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.597 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.597 13:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.597 13:21:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.597 13:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.597 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.597 "name": "Existed_Raid", 00:11:34.597 "uuid": "415bdcb4-5179-4881-a589-df5683c9a9ee", 00:11:34.597 "strip_size_kb": 0, 00:11:34.597 "state": "configuring", 00:11:34.597 "raid_level": "raid1", 00:11:34.597 "superblock": true, 00:11:34.597 "num_base_bdevs": 4, 00:11:34.597 "num_base_bdevs_discovered": 3, 00:11:34.597 "num_base_bdevs_operational": 4, 00:11:34.597 "base_bdevs_list": [ 00:11:34.597 { 00:11:34.597 "name": null, 00:11:34.597 "uuid": "64c323dc-81df-4d09-b8b6-cda668b4ba1e", 00:11:34.597 "is_configured": false, 00:11:34.597 "data_offset": 0, 00:11:34.597 "data_size": 63488 00:11:34.597 }, 00:11:34.597 { 00:11:34.597 "name": "BaseBdev2", 00:11:34.597 "uuid": "af2426ee-b283-4a60-b07a-b0e83abb7e49", 00:11:34.597 "is_configured": true, 00:11:34.597 "data_offset": 2048, 00:11:34.598 "data_size": 63488 00:11:34.598 }, 00:11:34.598 { 00:11:34.598 "name": "BaseBdev3", 00:11:34.598 "uuid": "a6bb11e7-3faa-461e-af36-8216d45610f7", 00:11:34.598 "is_configured": true, 00:11:34.598 "data_offset": 2048, 00:11:34.598 "data_size": 63488 00:11:34.598 }, 00:11:34.598 { 00:11:34.598 "name": "BaseBdev4", 00:11:34.598 "uuid": "c8de8e3f-ae1c-4a07-8e45-4e340d5035ed", 00:11:34.598 "is_configured": true, 00:11:34.598 "data_offset": 2048, 00:11:34.598 "data_size": 63488 00:11:34.598 } 00:11:34.598 ] 00:11:34.598 }' 00:11:34.598 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.598 13:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.858 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:34.858 13:21:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.858 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.858 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.858 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.858 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:34.858 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.858 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.858 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:34.858 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.118 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.118 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 64c323dc-81df-4d09-b8b6-cda668b4ba1e 00:11:35.118 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.118 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.118 [2024-11-17 13:21:24.150861] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:35.118 [2024-11-17 13:21:24.151076] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:35.118 [2024-11-17 13:21:24.151093] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:35.118 [2024-11-17 13:21:24.151406] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:11:35.118 [2024-11-17 13:21:24.151563] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:35.118 [2024-11-17 13:21:24.151573] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:35.118 NewBaseBdev 00:11:35.118 [2024-11-17 13:21:24.151702] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:35.118 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.118 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:35.118 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:35.118 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:35.118 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:35.118 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:35.118 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:35.118 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:35.118 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.118 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.118 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.118 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:35.118 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.118 13:21:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.118 [ 00:11:35.118 { 00:11:35.118 "name": "NewBaseBdev", 00:11:35.118 "aliases": [ 00:11:35.118 "64c323dc-81df-4d09-b8b6-cda668b4ba1e" 00:11:35.118 ], 00:11:35.118 "product_name": "Malloc disk", 00:11:35.118 "block_size": 512, 00:11:35.118 "num_blocks": 65536, 00:11:35.118 "uuid": "64c323dc-81df-4d09-b8b6-cda668b4ba1e", 00:11:35.118 "assigned_rate_limits": { 00:11:35.118 "rw_ios_per_sec": 0, 00:11:35.118 "rw_mbytes_per_sec": 0, 00:11:35.118 "r_mbytes_per_sec": 0, 00:11:35.118 "w_mbytes_per_sec": 0 00:11:35.118 }, 00:11:35.118 "claimed": true, 00:11:35.118 "claim_type": "exclusive_write", 00:11:35.118 "zoned": false, 00:11:35.118 "supported_io_types": { 00:11:35.118 "read": true, 00:11:35.118 "write": true, 00:11:35.118 "unmap": true, 00:11:35.118 "flush": true, 00:11:35.119 "reset": true, 00:11:35.119 "nvme_admin": false, 00:11:35.119 "nvme_io": false, 00:11:35.119 "nvme_io_md": false, 00:11:35.119 "write_zeroes": true, 00:11:35.119 "zcopy": true, 00:11:35.119 "get_zone_info": false, 00:11:35.119 "zone_management": false, 00:11:35.119 "zone_append": false, 00:11:35.119 "compare": false, 00:11:35.119 "compare_and_write": false, 00:11:35.119 "abort": true, 00:11:35.119 "seek_hole": false, 00:11:35.119 "seek_data": false, 00:11:35.119 "copy": true, 00:11:35.119 "nvme_iov_md": false 00:11:35.119 }, 00:11:35.119 "memory_domains": [ 00:11:35.119 { 00:11:35.119 "dma_device_id": "system", 00:11:35.119 "dma_device_type": 1 00:11:35.119 }, 00:11:35.119 { 00:11:35.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.119 "dma_device_type": 2 00:11:35.119 } 00:11:35.119 ], 00:11:35.119 "driver_specific": {} 00:11:35.119 } 00:11:35.119 ] 00:11:35.119 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.119 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:35.119 13:21:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:35.119 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:35.119 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:35.119 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:35.119 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:35.119 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:35.119 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.119 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.119 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.119 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.119 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.119 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.119 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.119 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.119 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.119 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.119 "name": "Existed_Raid", 00:11:35.119 "uuid": "415bdcb4-5179-4881-a589-df5683c9a9ee", 00:11:35.119 "strip_size_kb": 0, 00:11:35.119 
"state": "online", 00:11:35.119 "raid_level": "raid1", 00:11:35.119 "superblock": true, 00:11:35.119 "num_base_bdevs": 4, 00:11:35.119 "num_base_bdevs_discovered": 4, 00:11:35.119 "num_base_bdevs_operational": 4, 00:11:35.119 "base_bdevs_list": [ 00:11:35.119 { 00:11:35.119 "name": "NewBaseBdev", 00:11:35.119 "uuid": "64c323dc-81df-4d09-b8b6-cda668b4ba1e", 00:11:35.119 "is_configured": true, 00:11:35.119 "data_offset": 2048, 00:11:35.119 "data_size": 63488 00:11:35.119 }, 00:11:35.119 { 00:11:35.119 "name": "BaseBdev2", 00:11:35.119 "uuid": "af2426ee-b283-4a60-b07a-b0e83abb7e49", 00:11:35.119 "is_configured": true, 00:11:35.119 "data_offset": 2048, 00:11:35.119 "data_size": 63488 00:11:35.119 }, 00:11:35.119 { 00:11:35.119 "name": "BaseBdev3", 00:11:35.119 "uuid": "a6bb11e7-3faa-461e-af36-8216d45610f7", 00:11:35.119 "is_configured": true, 00:11:35.119 "data_offset": 2048, 00:11:35.119 "data_size": 63488 00:11:35.119 }, 00:11:35.119 { 00:11:35.119 "name": "BaseBdev4", 00:11:35.119 "uuid": "c8de8e3f-ae1c-4a07-8e45-4e340d5035ed", 00:11:35.119 "is_configured": true, 00:11:35.119 "data_offset": 2048, 00:11:35.119 "data_size": 63488 00:11:35.119 } 00:11:35.119 ] 00:11:35.119 }' 00:11:35.119 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.119 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.689 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:35.689 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:35.689 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:35.689 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:35.689 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:35.689 
13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:35.689 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:35.689 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:35.689 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.689 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.689 [2024-11-17 13:21:24.626431] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:35.689 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.689 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:35.689 "name": "Existed_Raid", 00:11:35.689 "aliases": [ 00:11:35.689 "415bdcb4-5179-4881-a589-df5683c9a9ee" 00:11:35.689 ], 00:11:35.689 "product_name": "Raid Volume", 00:11:35.689 "block_size": 512, 00:11:35.689 "num_blocks": 63488, 00:11:35.689 "uuid": "415bdcb4-5179-4881-a589-df5683c9a9ee", 00:11:35.689 "assigned_rate_limits": { 00:11:35.689 "rw_ios_per_sec": 0, 00:11:35.689 "rw_mbytes_per_sec": 0, 00:11:35.689 "r_mbytes_per_sec": 0, 00:11:35.689 "w_mbytes_per_sec": 0 00:11:35.689 }, 00:11:35.689 "claimed": false, 00:11:35.689 "zoned": false, 00:11:35.689 "supported_io_types": { 00:11:35.689 "read": true, 00:11:35.689 "write": true, 00:11:35.689 "unmap": false, 00:11:35.689 "flush": false, 00:11:35.689 "reset": true, 00:11:35.689 "nvme_admin": false, 00:11:35.689 "nvme_io": false, 00:11:35.689 "nvme_io_md": false, 00:11:35.689 "write_zeroes": true, 00:11:35.689 "zcopy": false, 00:11:35.689 "get_zone_info": false, 00:11:35.689 "zone_management": false, 00:11:35.689 "zone_append": false, 00:11:35.689 "compare": false, 00:11:35.689 "compare_and_write": false, 00:11:35.689 
"abort": false, 00:11:35.689 "seek_hole": false, 00:11:35.689 "seek_data": false, 00:11:35.689 "copy": false, 00:11:35.689 "nvme_iov_md": false 00:11:35.689 }, 00:11:35.689 "memory_domains": [ 00:11:35.689 { 00:11:35.689 "dma_device_id": "system", 00:11:35.689 "dma_device_type": 1 00:11:35.689 }, 00:11:35.689 { 00:11:35.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.689 "dma_device_type": 2 00:11:35.689 }, 00:11:35.689 { 00:11:35.689 "dma_device_id": "system", 00:11:35.689 "dma_device_type": 1 00:11:35.689 }, 00:11:35.689 { 00:11:35.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.689 "dma_device_type": 2 00:11:35.689 }, 00:11:35.689 { 00:11:35.689 "dma_device_id": "system", 00:11:35.689 "dma_device_type": 1 00:11:35.689 }, 00:11:35.689 { 00:11:35.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.689 "dma_device_type": 2 00:11:35.689 }, 00:11:35.689 { 00:11:35.689 "dma_device_id": "system", 00:11:35.689 "dma_device_type": 1 00:11:35.689 }, 00:11:35.689 { 00:11:35.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.689 "dma_device_type": 2 00:11:35.689 } 00:11:35.689 ], 00:11:35.689 "driver_specific": { 00:11:35.689 "raid": { 00:11:35.689 "uuid": "415bdcb4-5179-4881-a589-df5683c9a9ee", 00:11:35.689 "strip_size_kb": 0, 00:11:35.689 "state": "online", 00:11:35.689 "raid_level": "raid1", 00:11:35.689 "superblock": true, 00:11:35.689 "num_base_bdevs": 4, 00:11:35.689 "num_base_bdevs_discovered": 4, 00:11:35.689 "num_base_bdevs_operational": 4, 00:11:35.689 "base_bdevs_list": [ 00:11:35.689 { 00:11:35.689 "name": "NewBaseBdev", 00:11:35.689 "uuid": "64c323dc-81df-4d09-b8b6-cda668b4ba1e", 00:11:35.689 "is_configured": true, 00:11:35.689 "data_offset": 2048, 00:11:35.689 "data_size": 63488 00:11:35.689 }, 00:11:35.689 { 00:11:35.689 "name": "BaseBdev2", 00:11:35.689 "uuid": "af2426ee-b283-4a60-b07a-b0e83abb7e49", 00:11:35.689 "is_configured": true, 00:11:35.689 "data_offset": 2048, 00:11:35.689 "data_size": 63488 00:11:35.689 }, 00:11:35.689 { 
00:11:35.689 "name": "BaseBdev3", 00:11:35.689 "uuid": "a6bb11e7-3faa-461e-af36-8216d45610f7", 00:11:35.689 "is_configured": true, 00:11:35.689 "data_offset": 2048, 00:11:35.689 "data_size": 63488 00:11:35.689 }, 00:11:35.689 { 00:11:35.689 "name": "BaseBdev4", 00:11:35.689 "uuid": "c8de8e3f-ae1c-4a07-8e45-4e340d5035ed", 00:11:35.689 "is_configured": true, 00:11:35.689 "data_offset": 2048, 00:11:35.689 "data_size": 63488 00:11:35.689 } 00:11:35.689 ] 00:11:35.689 } 00:11:35.689 } 00:11:35.689 }' 00:11:35.689 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:35.689 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:35.689 BaseBdev2 00:11:35.689 BaseBdev3 00:11:35.689 BaseBdev4' 00:11:35.689 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.689 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:35.689 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:35.689 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:35.689 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.689 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.689 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.689 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.689 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:11:35.689 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:35.689 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:35.689 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:35.689 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.689 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.689 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.689 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.689 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:35.689 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:35.689 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:35.689 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.689 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:35.690 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.690 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.690 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.690 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:35.690 13:21:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:35.690 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:35.690 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:35.690 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.690 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.690 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.690 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.949 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:35.949 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:35.949 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:35.949 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.949 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.949 [2024-11-17 13:21:24.929640] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:35.949 [2024-11-17 13:21:24.929670] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:35.949 [2024-11-17 13:21:24.929763] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:35.949 [2024-11-17 13:21:24.930082] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:35.949 [2024-11-17 13:21:24.930098] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:11:35.949 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.949 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73755 00:11:35.949 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 73755 ']' 00:11:35.949 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 73755 00:11:35.949 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:35.949 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:35.949 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73755 00:11:35.949 killing process with pid 73755 00:11:35.949 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:35.949 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:35.949 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73755' 00:11:35.949 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 73755 00:11:35.949 [2024-11-17 13:21:24.974147] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:35.949 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 73755 00:11:36.209 [2024-11-17 13:21:25.361581] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:37.590 13:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:37.590 00:11:37.590 real 0m11.582s 00:11:37.590 user 0m18.384s 00:11:37.590 sys 0m2.157s 00:11:37.590 ************************************ 00:11:37.590 END TEST raid_state_function_test_sb 
00:11:37.590 ************************************ 00:11:37.590 13:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:37.590 13:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.590 13:21:26 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:11:37.590 13:21:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:37.590 13:21:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:37.590 13:21:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:37.590 ************************************ 00:11:37.590 START TEST raid_superblock_test 00:11:37.590 ************************************ 00:11:37.590 13:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:11:37.590 13:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:11:37.590 13:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:37.590 13:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:37.590 13:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:37.590 13:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:37.590 13:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:37.590 13:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:37.590 13:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:37.590 13:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:37.590 13:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:37.590 13:21:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:37.590 13:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:37.590 13:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:37.590 13:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:11:37.590 13:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:11:37.590 13:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74425 00:11:37.590 13:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:37.590 13:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74425 00:11:37.590 13:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74425 ']' 00:11:37.590 13:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:37.590 13:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:37.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:37.590 13:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:37.590 13:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:37.590 13:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.590 [2024-11-17 13:21:26.614201] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:11:37.590 [2024-11-17 13:21:26.614418] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74425 ] 00:11:37.590 [2024-11-17 13:21:26.770064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:37.849 [2024-11-17 13:21:26.879817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.108 [2024-11-17 13:21:27.077017] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:38.108 [2024-11-17 13:21:27.077160] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:38.368 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:38.368 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:38.368 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:38.368 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:38.368 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:38.368 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:38.368 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:38.368 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:38.368 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:38.368 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:38.368 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:38.368 
13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.368 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.368 malloc1 00:11:38.368 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.368 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:38.368 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.368 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.368 [2024-11-17 13:21:27.493318] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:38.368 [2024-11-17 13:21:27.493442] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:38.368 [2024-11-17 13:21:27.493472] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:38.368 [2024-11-17 13:21:27.493494] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:38.368 [2024-11-17 13:21:27.495691] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:38.368 [2024-11-17 13:21:27.495736] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:38.368 pt1 00:11:38.368 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.368 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:38.368 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:38.368 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:38.368 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:38.368 13:21:27 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:38.368 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:38.368 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:38.368 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:38.368 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:38.368 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.368 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.368 malloc2 00:11:38.368 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.369 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:38.369 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.369 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.369 [2024-11-17 13:21:27.547081] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:38.369 [2024-11-17 13:21:27.547183] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:38.369 [2024-11-17 13:21:27.547221] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:38.369 [2024-11-17 13:21:27.547288] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:38.369 [2024-11-17 13:21:27.549329] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:38.369 [2024-11-17 13:21:27.549393] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:38.369 
pt2 00:11:38.369 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.369 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:38.369 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:38.369 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:38.369 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:38.369 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:38.369 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:38.369 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:38.369 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:38.369 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:38.369 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.369 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.629 malloc3 00:11:38.629 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.629 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:38.629 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.629 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.629 [2024-11-17 13:21:27.613838] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:38.629 [2024-11-17 13:21:27.613942] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:38.629 [2024-11-17 13:21:27.613982] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:38.629 [2024-11-17 13:21:27.614014] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:38.629 [2024-11-17 13:21:27.616042] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:38.629 [2024-11-17 13:21:27.616109] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:38.629 pt3 00:11:38.629 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.629 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:38.629 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:38.629 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:38.629 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:38.629 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:38.629 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:38.629 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:38.629 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:38.629 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:38.629 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.629 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.629 malloc4 00:11:38.629 13:21:27 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.629 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:38.629 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.629 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.629 [2024-11-17 13:21:27.673636] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:38.629 [2024-11-17 13:21:27.673695] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:38.629 [2024-11-17 13:21:27.673728] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:38.629 [2024-11-17 13:21:27.673737] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:38.629 [2024-11-17 13:21:27.675756] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:38.629 pt4 00:11:38.629 [2024-11-17 13:21:27.675832] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:38.629 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.629 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:38.629 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:38.629 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:38.629 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.629 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.629 [2024-11-17 13:21:27.685644] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:38.629 [2024-11-17 13:21:27.687396] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:38.629 [2024-11-17 13:21:27.687517] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:38.629 [2024-11-17 13:21:27.687574] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:38.629 [2024-11-17 13:21:27.687817] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:38.629 [2024-11-17 13:21:27.687866] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:38.629 [2024-11-17 13:21:27.688172] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:38.629 [2024-11-17 13:21:27.688401] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:38.629 [2024-11-17 13:21:27.688451] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:38.629 [2024-11-17 13:21:27.688654] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:38.629 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.629 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:38.630 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:38.630 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:38.630 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:38.630 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:38.630 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:38.630 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.630 
13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.630 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.630 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.630 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.630 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.630 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:38.630 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.630 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.630 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.630 "name": "raid_bdev1", 00:11:38.630 "uuid": "3bad0d42-a655-4050-b1c8-34cd54ef927e", 00:11:38.630 "strip_size_kb": 0, 00:11:38.630 "state": "online", 00:11:38.630 "raid_level": "raid1", 00:11:38.630 "superblock": true, 00:11:38.630 "num_base_bdevs": 4, 00:11:38.630 "num_base_bdevs_discovered": 4, 00:11:38.630 "num_base_bdevs_operational": 4, 00:11:38.630 "base_bdevs_list": [ 00:11:38.630 { 00:11:38.630 "name": "pt1", 00:11:38.630 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:38.630 "is_configured": true, 00:11:38.630 "data_offset": 2048, 00:11:38.630 "data_size": 63488 00:11:38.630 }, 00:11:38.630 { 00:11:38.630 "name": "pt2", 00:11:38.630 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:38.630 "is_configured": true, 00:11:38.630 "data_offset": 2048, 00:11:38.630 "data_size": 63488 00:11:38.630 }, 00:11:38.630 { 00:11:38.630 "name": "pt3", 00:11:38.630 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:38.630 "is_configured": true, 00:11:38.630 "data_offset": 2048, 00:11:38.630 "data_size": 63488 
00:11:38.630 }, 00:11:38.630 { 00:11:38.630 "name": "pt4", 00:11:38.630 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:38.630 "is_configured": true, 00:11:38.630 "data_offset": 2048, 00:11:38.630 "data_size": 63488 00:11:38.630 } 00:11:38.630 ] 00:11:38.630 }' 00:11:38.630 13:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.630 13:21:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.199 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:39.200 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:39.200 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:39.200 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:39.200 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:39.200 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:39.200 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:39.200 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:39.200 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.200 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.200 [2024-11-17 13:21:28.129164] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:39.200 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.200 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:39.200 "name": "raid_bdev1", 00:11:39.200 "aliases": [ 00:11:39.200 "3bad0d42-a655-4050-b1c8-34cd54ef927e" 00:11:39.200 ], 
00:11:39.200 "product_name": "Raid Volume", 00:11:39.200 "block_size": 512, 00:11:39.200 "num_blocks": 63488, 00:11:39.200 "uuid": "3bad0d42-a655-4050-b1c8-34cd54ef927e", 00:11:39.200 "assigned_rate_limits": { 00:11:39.200 "rw_ios_per_sec": 0, 00:11:39.200 "rw_mbytes_per_sec": 0, 00:11:39.200 "r_mbytes_per_sec": 0, 00:11:39.200 "w_mbytes_per_sec": 0 00:11:39.200 }, 00:11:39.200 "claimed": false, 00:11:39.200 "zoned": false, 00:11:39.200 "supported_io_types": { 00:11:39.200 "read": true, 00:11:39.200 "write": true, 00:11:39.200 "unmap": false, 00:11:39.200 "flush": false, 00:11:39.200 "reset": true, 00:11:39.200 "nvme_admin": false, 00:11:39.200 "nvme_io": false, 00:11:39.200 "nvme_io_md": false, 00:11:39.200 "write_zeroes": true, 00:11:39.200 "zcopy": false, 00:11:39.200 "get_zone_info": false, 00:11:39.200 "zone_management": false, 00:11:39.200 "zone_append": false, 00:11:39.200 "compare": false, 00:11:39.200 "compare_and_write": false, 00:11:39.200 "abort": false, 00:11:39.200 "seek_hole": false, 00:11:39.200 "seek_data": false, 00:11:39.200 "copy": false, 00:11:39.200 "nvme_iov_md": false 00:11:39.200 }, 00:11:39.200 "memory_domains": [ 00:11:39.200 { 00:11:39.200 "dma_device_id": "system", 00:11:39.200 "dma_device_type": 1 00:11:39.200 }, 00:11:39.200 { 00:11:39.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.200 "dma_device_type": 2 00:11:39.200 }, 00:11:39.200 { 00:11:39.200 "dma_device_id": "system", 00:11:39.200 "dma_device_type": 1 00:11:39.200 }, 00:11:39.200 { 00:11:39.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.200 "dma_device_type": 2 00:11:39.200 }, 00:11:39.200 { 00:11:39.200 "dma_device_id": "system", 00:11:39.200 "dma_device_type": 1 00:11:39.200 }, 00:11:39.200 { 00:11:39.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.200 "dma_device_type": 2 00:11:39.200 }, 00:11:39.200 { 00:11:39.200 "dma_device_id": "system", 00:11:39.200 "dma_device_type": 1 00:11:39.200 }, 00:11:39.200 { 00:11:39.200 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:39.200 "dma_device_type": 2 00:11:39.200 } 00:11:39.200 ], 00:11:39.200 "driver_specific": { 00:11:39.200 "raid": { 00:11:39.200 "uuid": "3bad0d42-a655-4050-b1c8-34cd54ef927e", 00:11:39.200 "strip_size_kb": 0, 00:11:39.200 "state": "online", 00:11:39.200 "raid_level": "raid1", 00:11:39.200 "superblock": true, 00:11:39.200 "num_base_bdevs": 4, 00:11:39.200 "num_base_bdevs_discovered": 4, 00:11:39.200 "num_base_bdevs_operational": 4, 00:11:39.200 "base_bdevs_list": [ 00:11:39.200 { 00:11:39.200 "name": "pt1", 00:11:39.200 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:39.200 "is_configured": true, 00:11:39.200 "data_offset": 2048, 00:11:39.200 "data_size": 63488 00:11:39.200 }, 00:11:39.200 { 00:11:39.200 "name": "pt2", 00:11:39.200 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:39.200 "is_configured": true, 00:11:39.200 "data_offset": 2048, 00:11:39.200 "data_size": 63488 00:11:39.200 }, 00:11:39.200 { 00:11:39.200 "name": "pt3", 00:11:39.200 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:39.200 "is_configured": true, 00:11:39.200 "data_offset": 2048, 00:11:39.200 "data_size": 63488 00:11:39.200 }, 00:11:39.200 { 00:11:39.200 "name": "pt4", 00:11:39.200 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:39.200 "is_configured": true, 00:11:39.200 "data_offset": 2048, 00:11:39.200 "data_size": 63488 00:11:39.200 } 00:11:39.200 ] 00:11:39.200 } 00:11:39.200 } 00:11:39.200 }' 00:11:39.200 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:39.200 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:39.200 pt2 00:11:39.200 pt3 00:11:39.200 pt4' 00:11:39.200 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:39.200 13:21:28 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:39.200 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:39.200 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:39.200 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.200 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.200 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:39.200 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.200 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:39.200 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:39.200 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:39.200 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:39.200 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.200 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.200 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:39.200 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.200 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:39.200 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:39.200 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:39.200 13:21:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:39.200 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.200 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:39.200 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.200 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.200 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:39.200 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:39.200 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:39.200 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:39.200 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:39.200 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.200 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.200 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.200 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:39.200 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:39.200 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:39.200 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:39.200 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:39.200 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.460 [2024-11-17 13:21:28.428624] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:39.461 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.461 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=3bad0d42-a655-4050-b1c8-34cd54ef927e 00:11:39.461 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 3bad0d42-a655-4050-b1c8-34cd54ef927e ']' 00:11:39.461 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:39.461 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.461 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.461 [2024-11-17 13:21:28.472276] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:39.461 [2024-11-17 13:21:28.472299] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:39.461 [2024-11-17 13:21:28.472369] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:39.461 [2024-11-17 13:21:28.472448] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:39.461 [2024-11-17 13:21:28.472463] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:39.461 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.461 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.461 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:39.461 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:11:39.461 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.461 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.461 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:39.461 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:39.461 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:39.461 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:39.461 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.461 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.461 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.461 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:39.461 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:39.461 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.461 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.461 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.461 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:39.461 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:39.461 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.461 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.461 13:21:28 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.461 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:39.461 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:39.461 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.461 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.461 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.461 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:39.461 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.461 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:39.461 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.461 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.461 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:39.461 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:39.461 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:39.461 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:39.461 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:39.461 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:39.461 13:21:28 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:39.461 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:39.461 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:39.461 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.461 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.461 [2024-11-17 13:21:28.640002] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:39.461 [2024-11-17 13:21:28.641890] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:39.461 [2024-11-17 13:21:28.641940] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:39.461 [2024-11-17 13:21:28.641973] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:39.461 [2024-11-17 13:21:28.642021] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:39.461 [2024-11-17 13:21:28.642068] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:39.461 [2024-11-17 13:21:28.642087] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:39.461 [2024-11-17 13:21:28.642105] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:39.461 [2024-11-17 13:21:28.642118] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:39.461 [2024-11-17 13:21:28.642128] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
raid_bdev1, state configuring 00:11:39.461 request: 00:11:39.461 { 00:11:39.461 "name": "raid_bdev1", 00:11:39.461 "raid_level": "raid1", 00:11:39.461 "base_bdevs": [ 00:11:39.461 "malloc1", 00:11:39.461 "malloc2", 00:11:39.461 "malloc3", 00:11:39.461 "malloc4" 00:11:39.461 ], 00:11:39.461 "superblock": false, 00:11:39.461 "method": "bdev_raid_create", 00:11:39.461 "req_id": 1 00:11:39.461 } 00:11:39.461 Got JSON-RPC error response 00:11:39.461 response: 00:11:39.461 { 00:11:39.461 "code": -17, 00:11:39.461 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:39.461 } 00:11:39.461 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:39.461 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:39.461 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:39.461 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:39.461 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:39.461 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.461 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.461 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.461 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:39.461 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.721 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:39.721 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:39.721 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:39.721 
13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.721 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.721 [2024-11-17 13:21:28.707888] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:39.721 [2024-11-17 13:21:28.707955] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:39.721 [2024-11-17 13:21:28.707974] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:39.721 [2024-11-17 13:21:28.707986] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:39.721 [2024-11-17 13:21:28.710137] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:39.721 [2024-11-17 13:21:28.710237] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:39.721 [2024-11-17 13:21:28.710336] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:39.721 [2024-11-17 13:21:28.710400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:39.721 pt1 00:11:39.721 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.721 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:11:39.721 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:39.721 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:39.721 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:39.721 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:39.721 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:39.721 13:21:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.721 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.721 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.721 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.721 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.721 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:39.721 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.721 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.721 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.721 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.721 "name": "raid_bdev1", 00:11:39.721 "uuid": "3bad0d42-a655-4050-b1c8-34cd54ef927e", 00:11:39.721 "strip_size_kb": 0, 00:11:39.721 "state": "configuring", 00:11:39.721 "raid_level": "raid1", 00:11:39.721 "superblock": true, 00:11:39.721 "num_base_bdevs": 4, 00:11:39.721 "num_base_bdevs_discovered": 1, 00:11:39.721 "num_base_bdevs_operational": 4, 00:11:39.721 "base_bdevs_list": [ 00:11:39.721 { 00:11:39.721 "name": "pt1", 00:11:39.721 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:39.721 "is_configured": true, 00:11:39.721 "data_offset": 2048, 00:11:39.721 "data_size": 63488 00:11:39.721 }, 00:11:39.721 { 00:11:39.721 "name": null, 00:11:39.721 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:39.721 "is_configured": false, 00:11:39.721 "data_offset": 2048, 00:11:39.721 "data_size": 63488 00:11:39.721 }, 00:11:39.721 { 00:11:39.722 "name": null, 00:11:39.722 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:39.722 
"is_configured": false, 00:11:39.722 "data_offset": 2048, 00:11:39.722 "data_size": 63488 00:11:39.722 }, 00:11:39.722 { 00:11:39.722 "name": null, 00:11:39.722 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:39.722 "is_configured": false, 00:11:39.722 "data_offset": 2048, 00:11:39.722 "data_size": 63488 00:11:39.722 } 00:11:39.722 ] 00:11:39.722 }' 00:11:39.722 13:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.722 13:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.983 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:39.983 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:39.983 13:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.984 13:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.984 [2024-11-17 13:21:29.179122] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:39.984 [2024-11-17 13:21:29.179258] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:39.984 [2024-11-17 13:21:29.179298] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:39.984 [2024-11-17 13:21:29.179328] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:39.984 [2024-11-17 13:21:29.179855] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:39.984 [2024-11-17 13:21:29.179916] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:39.984 [2024-11-17 13:21:29.180036] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:39.984 [2024-11-17 13:21:29.180097] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:11:39.984 pt2 00:11:39.984 13:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.984 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:39.984 13:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.984 13:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.984 [2024-11-17 13:21:29.187085] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:39.984 13:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.984 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:11:39.984 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:39.984 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:39.984 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:39.984 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:39.984 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:39.984 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.984 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.984 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.984 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.984 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:39.984 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:11:39.984 13:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.984 13:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.244 13:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.244 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.244 "name": "raid_bdev1", 00:11:40.244 "uuid": "3bad0d42-a655-4050-b1c8-34cd54ef927e", 00:11:40.244 "strip_size_kb": 0, 00:11:40.244 "state": "configuring", 00:11:40.244 "raid_level": "raid1", 00:11:40.244 "superblock": true, 00:11:40.244 "num_base_bdevs": 4, 00:11:40.244 "num_base_bdevs_discovered": 1, 00:11:40.244 "num_base_bdevs_operational": 4, 00:11:40.244 "base_bdevs_list": [ 00:11:40.244 { 00:11:40.244 "name": "pt1", 00:11:40.244 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:40.244 "is_configured": true, 00:11:40.244 "data_offset": 2048, 00:11:40.244 "data_size": 63488 00:11:40.244 }, 00:11:40.244 { 00:11:40.244 "name": null, 00:11:40.244 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:40.244 "is_configured": false, 00:11:40.244 "data_offset": 0, 00:11:40.244 "data_size": 63488 00:11:40.244 }, 00:11:40.244 { 00:11:40.244 "name": null, 00:11:40.244 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:40.244 "is_configured": false, 00:11:40.244 "data_offset": 2048, 00:11:40.244 "data_size": 63488 00:11:40.244 }, 00:11:40.244 { 00:11:40.244 "name": null, 00:11:40.244 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:40.244 "is_configured": false, 00:11:40.244 "data_offset": 2048, 00:11:40.244 "data_size": 63488 00:11:40.244 } 00:11:40.244 ] 00:11:40.244 }' 00:11:40.244 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.244 13:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.504 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:11:40.504 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:40.504 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:40.504 13:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.504 13:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.504 [2024-11-17 13:21:29.606368] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:40.505 [2024-11-17 13:21:29.606429] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:40.505 [2024-11-17 13:21:29.606455] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:40.505 [2024-11-17 13:21:29.606467] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:40.505 [2024-11-17 13:21:29.606892] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:40.505 [2024-11-17 13:21:29.606947] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:40.505 [2024-11-17 13:21:29.607038] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:40.505 [2024-11-17 13:21:29.607061] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:40.505 pt2 00:11:40.505 13:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.505 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:40.505 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:40.505 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:40.505 13:21:29 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.505 13:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.505 [2024-11-17 13:21:29.618333] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:40.505 [2024-11-17 13:21:29.618382] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:40.505 [2024-11-17 13:21:29.618399] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:40.505 [2024-11-17 13:21:29.618408] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:40.505 [2024-11-17 13:21:29.618770] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:40.505 [2024-11-17 13:21:29.618792] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:40.505 [2024-11-17 13:21:29.618852] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:40.505 [2024-11-17 13:21:29.618900] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:40.505 pt3 00:11:40.505 13:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.505 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:40.505 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:40.505 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:40.505 13:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.505 13:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.505 [2024-11-17 13:21:29.630286] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:40.505 [2024-11-17 
13:21:29.630325] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:40.505 [2024-11-17 13:21:29.630357] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:40.505 [2024-11-17 13:21:29.630365] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:40.505 [2024-11-17 13:21:29.630764] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:40.505 [2024-11-17 13:21:29.630785] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:40.505 [2024-11-17 13:21:29.630845] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:40.505 [2024-11-17 13:21:29.630861] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:40.505 [2024-11-17 13:21:29.631001] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:40.505 [2024-11-17 13:21:29.631009] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:40.505 [2024-11-17 13:21:29.631308] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:40.505 [2024-11-17 13:21:29.631509] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:40.505 [2024-11-17 13:21:29.631529] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:40.505 [2024-11-17 13:21:29.631665] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:40.505 pt4 00:11:40.505 13:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.505 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:40.505 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:40.505 13:21:29 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:40.505 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:40.505 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:40.505 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:40.505 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:40.505 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:40.505 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.505 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.505 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.505 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.505 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.505 13:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.505 13:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.505 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:40.505 13:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.505 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.505 "name": "raid_bdev1", 00:11:40.505 "uuid": "3bad0d42-a655-4050-b1c8-34cd54ef927e", 00:11:40.505 "strip_size_kb": 0, 00:11:40.505 "state": "online", 00:11:40.505 "raid_level": "raid1", 00:11:40.505 "superblock": true, 00:11:40.505 "num_base_bdevs": 4, 00:11:40.505 
"num_base_bdevs_discovered": 4, 00:11:40.505 "num_base_bdevs_operational": 4, 00:11:40.505 "base_bdevs_list": [ 00:11:40.505 { 00:11:40.505 "name": "pt1", 00:11:40.505 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:40.505 "is_configured": true, 00:11:40.505 "data_offset": 2048, 00:11:40.505 "data_size": 63488 00:11:40.505 }, 00:11:40.505 { 00:11:40.505 "name": "pt2", 00:11:40.505 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:40.505 "is_configured": true, 00:11:40.505 "data_offset": 2048, 00:11:40.505 "data_size": 63488 00:11:40.505 }, 00:11:40.505 { 00:11:40.505 "name": "pt3", 00:11:40.505 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:40.505 "is_configured": true, 00:11:40.505 "data_offset": 2048, 00:11:40.505 "data_size": 63488 00:11:40.505 }, 00:11:40.505 { 00:11:40.505 "name": "pt4", 00:11:40.505 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:40.505 "is_configured": true, 00:11:40.505 "data_offset": 2048, 00:11:40.505 "data_size": 63488 00:11:40.505 } 00:11:40.505 ] 00:11:40.505 }' 00:11:40.505 13:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.505 13:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.075 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:41.075 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:41.075 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:41.075 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:41.075 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:41.075 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:41.075 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:11:41.075 13:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.075 13:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.075 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:41.075 [2024-11-17 13:21:30.069892] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:41.075 13:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.075 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:41.075 "name": "raid_bdev1", 00:11:41.075 "aliases": [ 00:11:41.075 "3bad0d42-a655-4050-b1c8-34cd54ef927e" 00:11:41.075 ], 00:11:41.075 "product_name": "Raid Volume", 00:11:41.075 "block_size": 512, 00:11:41.075 "num_blocks": 63488, 00:11:41.075 "uuid": "3bad0d42-a655-4050-b1c8-34cd54ef927e", 00:11:41.075 "assigned_rate_limits": { 00:11:41.075 "rw_ios_per_sec": 0, 00:11:41.075 "rw_mbytes_per_sec": 0, 00:11:41.075 "r_mbytes_per_sec": 0, 00:11:41.075 "w_mbytes_per_sec": 0 00:11:41.075 }, 00:11:41.075 "claimed": false, 00:11:41.075 "zoned": false, 00:11:41.075 "supported_io_types": { 00:11:41.075 "read": true, 00:11:41.075 "write": true, 00:11:41.075 "unmap": false, 00:11:41.075 "flush": false, 00:11:41.075 "reset": true, 00:11:41.075 "nvme_admin": false, 00:11:41.075 "nvme_io": false, 00:11:41.075 "nvme_io_md": false, 00:11:41.075 "write_zeroes": true, 00:11:41.075 "zcopy": false, 00:11:41.075 "get_zone_info": false, 00:11:41.075 "zone_management": false, 00:11:41.075 "zone_append": false, 00:11:41.076 "compare": false, 00:11:41.076 "compare_and_write": false, 00:11:41.076 "abort": false, 00:11:41.076 "seek_hole": false, 00:11:41.076 "seek_data": false, 00:11:41.076 "copy": false, 00:11:41.076 "nvme_iov_md": false 00:11:41.076 }, 00:11:41.076 "memory_domains": [ 00:11:41.076 { 00:11:41.076 "dma_device_id": "system", 00:11:41.076 
"dma_device_type": 1 00:11:41.076 }, 00:11:41.076 { 00:11:41.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.076 "dma_device_type": 2 00:11:41.076 }, 00:11:41.076 { 00:11:41.076 "dma_device_id": "system", 00:11:41.076 "dma_device_type": 1 00:11:41.076 }, 00:11:41.076 { 00:11:41.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.076 "dma_device_type": 2 00:11:41.076 }, 00:11:41.076 { 00:11:41.076 "dma_device_id": "system", 00:11:41.076 "dma_device_type": 1 00:11:41.076 }, 00:11:41.076 { 00:11:41.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.076 "dma_device_type": 2 00:11:41.076 }, 00:11:41.076 { 00:11:41.076 "dma_device_id": "system", 00:11:41.076 "dma_device_type": 1 00:11:41.076 }, 00:11:41.076 { 00:11:41.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.076 "dma_device_type": 2 00:11:41.076 } 00:11:41.076 ], 00:11:41.076 "driver_specific": { 00:11:41.076 "raid": { 00:11:41.076 "uuid": "3bad0d42-a655-4050-b1c8-34cd54ef927e", 00:11:41.076 "strip_size_kb": 0, 00:11:41.076 "state": "online", 00:11:41.076 "raid_level": "raid1", 00:11:41.076 "superblock": true, 00:11:41.076 "num_base_bdevs": 4, 00:11:41.076 "num_base_bdevs_discovered": 4, 00:11:41.076 "num_base_bdevs_operational": 4, 00:11:41.076 "base_bdevs_list": [ 00:11:41.076 { 00:11:41.076 "name": "pt1", 00:11:41.076 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:41.076 "is_configured": true, 00:11:41.076 "data_offset": 2048, 00:11:41.076 "data_size": 63488 00:11:41.076 }, 00:11:41.076 { 00:11:41.076 "name": "pt2", 00:11:41.076 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:41.076 "is_configured": true, 00:11:41.076 "data_offset": 2048, 00:11:41.076 "data_size": 63488 00:11:41.076 }, 00:11:41.076 { 00:11:41.076 "name": "pt3", 00:11:41.076 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:41.076 "is_configured": true, 00:11:41.076 "data_offset": 2048, 00:11:41.076 "data_size": 63488 00:11:41.076 }, 00:11:41.076 { 00:11:41.076 "name": "pt4", 00:11:41.076 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:11:41.076 "is_configured": true, 00:11:41.076 "data_offset": 2048, 00:11:41.076 "data_size": 63488 00:11:41.076 } 00:11:41.076 ] 00:11:41.076 } 00:11:41.076 } 00:11:41.076 }' 00:11:41.076 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:41.076 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:41.076 pt2 00:11:41.076 pt3 00:11:41.076 pt4' 00:11:41.076 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:41.076 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:41.076 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:41.076 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:41.076 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:41.076 13:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.076 13:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.076 13:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.076 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:41.076 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:41.076 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:41.076 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:41.076 13:21:30 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:41.076 13:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.076 13:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.076 13:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.076 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:41.076 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:41.076 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:41.076 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:41.076 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:41.076 13:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.076 13:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.336 13:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.336 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:41.336 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:41.336 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:41.336 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:41.336 13:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.336 13:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.336 13:21:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:41.336 13:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.336 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:41.336 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:41.336 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:41.336 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:41.336 13:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.336 13:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.336 [2024-11-17 13:21:30.381332] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:41.336 13:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.337 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 3bad0d42-a655-4050-b1c8-34cd54ef927e '!=' 3bad0d42-a655-4050-b1c8-34cd54ef927e ']' 00:11:41.337 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:11:41.337 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:41.337 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:41.337 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:11:41.337 13:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.337 13:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.337 [2024-11-17 13:21:30.425003] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:11:41.337 
13:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.337 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:41.337 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:41.337 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:41.337 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:41.337 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:41.337 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:41.337 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.337 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.337 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.337 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.337 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.337 13:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.337 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:41.337 13:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.337 13:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.337 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.337 "name": "raid_bdev1", 00:11:41.337 "uuid": "3bad0d42-a655-4050-b1c8-34cd54ef927e", 00:11:41.337 "strip_size_kb": 0, 00:11:41.337 "state": 
"online", 00:11:41.337 "raid_level": "raid1", 00:11:41.337 "superblock": true, 00:11:41.337 "num_base_bdevs": 4, 00:11:41.337 "num_base_bdevs_discovered": 3, 00:11:41.337 "num_base_bdevs_operational": 3, 00:11:41.337 "base_bdevs_list": [ 00:11:41.337 { 00:11:41.337 "name": null, 00:11:41.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.337 "is_configured": false, 00:11:41.337 "data_offset": 0, 00:11:41.337 "data_size": 63488 00:11:41.337 }, 00:11:41.337 { 00:11:41.337 "name": "pt2", 00:11:41.337 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:41.337 "is_configured": true, 00:11:41.337 "data_offset": 2048, 00:11:41.337 "data_size": 63488 00:11:41.337 }, 00:11:41.337 { 00:11:41.337 "name": "pt3", 00:11:41.337 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:41.337 "is_configured": true, 00:11:41.337 "data_offset": 2048, 00:11:41.337 "data_size": 63488 00:11:41.337 }, 00:11:41.337 { 00:11:41.337 "name": "pt4", 00:11:41.337 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:41.337 "is_configured": true, 00:11:41.337 "data_offset": 2048, 00:11:41.337 "data_size": 63488 00:11:41.337 } 00:11:41.337 ] 00:11:41.337 }' 00:11:41.337 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.337 13:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.908 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:41.908 13:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.908 13:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.908 [2024-11-17 13:21:30.860232] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:41.908 [2024-11-17 13:21:30.860308] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:41.908 [2024-11-17 13:21:30.860408] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:41.908 [2024-11-17 13:21:30.860531] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:41.908 [2024-11-17 13:21:30.860586] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:41.908 13:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.908 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:11:41.908 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.908 13:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.908 13:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.908 13:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.908 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:11:41.908 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:11:41.908 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:11:41.908 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:41.908 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:11:41.908 13:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.908 13:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.908 13:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.908 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:41.908 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < 
num_base_bdevs )) 00:11:41.908 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:11:41.908 13:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.908 13:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.908 13:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.908 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:41.908 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:41.908 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:11:41.908 13:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.908 13:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.908 13:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.908 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:41.908 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:41.908 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:11:41.908 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:41.908 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:41.908 13:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.908 13:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.908 [2024-11-17 13:21:30.940059] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:41.908 [2024-11-17 
13:21:30.940143] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:41.908 [2024-11-17 13:21:30.940193] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:11:41.908 [2024-11-17 13:21:30.940234] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:41.908 [2024-11-17 13:21:30.942415] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:41.908 [2024-11-17 13:21:30.942483] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:41.908 [2024-11-17 13:21:30.942583] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:41.908 [2024-11-17 13:21:30.942642] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:41.908 pt2 00:11:41.908 13:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.908 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:41.908 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:41.908 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:41.908 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:41.908 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:41.908 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:41.908 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.908 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.908 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.908 13:21:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.908 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:41.908 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.908 13:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.908 13:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.908 13:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.908 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.908 "name": "raid_bdev1", 00:11:41.908 "uuid": "3bad0d42-a655-4050-b1c8-34cd54ef927e", 00:11:41.908 "strip_size_kb": 0, 00:11:41.908 "state": "configuring", 00:11:41.908 "raid_level": "raid1", 00:11:41.908 "superblock": true, 00:11:41.908 "num_base_bdevs": 4, 00:11:41.908 "num_base_bdevs_discovered": 1, 00:11:41.908 "num_base_bdevs_operational": 3, 00:11:41.908 "base_bdevs_list": [ 00:11:41.908 { 00:11:41.908 "name": null, 00:11:41.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.908 "is_configured": false, 00:11:41.908 "data_offset": 2048, 00:11:41.908 "data_size": 63488 00:11:41.908 }, 00:11:41.908 { 00:11:41.908 "name": "pt2", 00:11:41.908 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:41.908 "is_configured": true, 00:11:41.908 "data_offset": 2048, 00:11:41.908 "data_size": 63488 00:11:41.908 }, 00:11:41.908 { 00:11:41.908 "name": null, 00:11:41.908 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:41.908 "is_configured": false, 00:11:41.908 "data_offset": 2048, 00:11:41.909 "data_size": 63488 00:11:41.909 }, 00:11:41.909 { 00:11:41.909 "name": null, 00:11:41.909 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:41.909 "is_configured": false, 00:11:41.909 "data_offset": 2048, 00:11:41.909 "data_size": 63488 00:11:41.909 
} 00:11:41.909 ] 00:11:41.909 }' 00:11:41.909 13:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.909 13:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.169 13:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:42.169 13:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:42.169 13:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:42.169 13:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.169 13:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.169 [2024-11-17 13:21:31.367349] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:42.169 [2024-11-17 13:21:31.367439] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:42.169 [2024-11-17 13:21:31.367474] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:11:42.169 [2024-11-17 13:21:31.367501] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:42.169 [2024-11-17 13:21:31.367972] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:42.169 [2024-11-17 13:21:31.368027] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:42.169 [2024-11-17 13:21:31.368138] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:42.169 [2024-11-17 13:21:31.368187] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:42.169 pt3 00:11:42.169 13:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.169 13:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # 
verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:42.169 13:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:42.169 13:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:42.169 13:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:42.169 13:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:42.169 13:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:42.169 13:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.169 13:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.169 13:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.169 13:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.169 13:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.169 13:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:42.169 13:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.169 13:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.428 13:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.428 13:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.428 "name": "raid_bdev1", 00:11:42.428 "uuid": "3bad0d42-a655-4050-b1c8-34cd54ef927e", 00:11:42.428 "strip_size_kb": 0, 00:11:42.428 "state": "configuring", 00:11:42.428 "raid_level": "raid1", 00:11:42.428 "superblock": true, 00:11:42.428 "num_base_bdevs": 4, 00:11:42.428 "num_base_bdevs_discovered": 2, 
00:11:42.428 "num_base_bdevs_operational": 3, 00:11:42.428 "base_bdevs_list": [ 00:11:42.428 { 00:11:42.428 "name": null, 00:11:42.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.428 "is_configured": false, 00:11:42.428 "data_offset": 2048, 00:11:42.428 "data_size": 63488 00:11:42.428 }, 00:11:42.428 { 00:11:42.428 "name": "pt2", 00:11:42.428 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:42.428 "is_configured": true, 00:11:42.428 "data_offset": 2048, 00:11:42.428 "data_size": 63488 00:11:42.428 }, 00:11:42.428 { 00:11:42.428 "name": "pt3", 00:11:42.428 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:42.428 "is_configured": true, 00:11:42.428 "data_offset": 2048, 00:11:42.428 "data_size": 63488 00:11:42.428 }, 00:11:42.428 { 00:11:42.428 "name": null, 00:11:42.428 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:42.428 "is_configured": false, 00:11:42.428 "data_offset": 2048, 00:11:42.428 "data_size": 63488 00:11:42.428 } 00:11:42.428 ] 00:11:42.428 }' 00:11:42.428 13:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.428 13:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.688 13:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:42.688 13:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:42.688 13:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:11:42.688 13:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:42.688 13:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.688 13:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.688 [2024-11-17 13:21:31.830588] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:42.688 [2024-11-17 
13:21:31.830651] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:42.688 [2024-11-17 13:21:31.830673] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:11:42.688 [2024-11-17 13:21:31.830682] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:42.688 [2024-11-17 13:21:31.831134] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:42.688 [2024-11-17 13:21:31.831165] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:42.688 [2024-11-17 13:21:31.831266] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:42.688 [2024-11-17 13:21:31.831299] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:42.688 [2024-11-17 13:21:31.831440] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:42.688 [2024-11-17 13:21:31.831448] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:42.688 [2024-11-17 13:21:31.831713] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:42.688 [2024-11-17 13:21:31.831876] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:42.688 [2024-11-17 13:21:31.831890] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:42.688 [2024-11-17 13:21:31.832018] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:42.688 pt4 00:11:42.688 13:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.688 13:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:42.688 13:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:42.688 13:21:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:42.688 13:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:42.688 13:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:42.688 13:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:42.688 13:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.688 13:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.688 13:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.688 13:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.689 13:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.689 13:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:42.689 13:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.689 13:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.689 13:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.689 13:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.689 "name": "raid_bdev1", 00:11:42.689 "uuid": "3bad0d42-a655-4050-b1c8-34cd54ef927e", 00:11:42.689 "strip_size_kb": 0, 00:11:42.689 "state": "online", 00:11:42.689 "raid_level": "raid1", 00:11:42.689 "superblock": true, 00:11:42.689 "num_base_bdevs": 4, 00:11:42.689 "num_base_bdevs_discovered": 3, 00:11:42.689 "num_base_bdevs_operational": 3, 00:11:42.689 "base_bdevs_list": [ 00:11:42.689 { 00:11:42.689 "name": null, 00:11:42.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.689 
"is_configured": false, 00:11:42.689 "data_offset": 2048, 00:11:42.689 "data_size": 63488 00:11:42.689 }, 00:11:42.689 { 00:11:42.689 "name": "pt2", 00:11:42.689 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:42.689 "is_configured": true, 00:11:42.689 "data_offset": 2048, 00:11:42.689 "data_size": 63488 00:11:42.689 }, 00:11:42.689 { 00:11:42.689 "name": "pt3", 00:11:42.689 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:42.689 "is_configured": true, 00:11:42.689 "data_offset": 2048, 00:11:42.689 "data_size": 63488 00:11:42.689 }, 00:11:42.689 { 00:11:42.689 "name": "pt4", 00:11:42.689 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:42.689 "is_configured": true, 00:11:42.689 "data_offset": 2048, 00:11:42.689 "data_size": 63488 00:11:42.689 } 00:11:42.689 ] 00:11:42.689 }' 00:11:42.689 13:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.689 13:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.259 13:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:43.259 13:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.259 13:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.259 [2024-11-17 13:21:32.245848] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:43.259 [2024-11-17 13:21:32.245940] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:43.259 [2024-11-17 13:21:32.246066] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:43.259 [2024-11-17 13:21:32.246182] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:43.259 [2024-11-17 13:21:32.246273] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 
00:11:43.259 13:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.259 13:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.259 13:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.259 13:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.259 13:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:11:43.259 13:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.259 13:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:11:43.259 13:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:11:43.259 13:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:11:43.259 13:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:11:43.259 13:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:11:43.259 13:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.259 13:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.259 13:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.259 13:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:43.259 13:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.259 13:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.259 [2024-11-17 13:21:32.309712] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:43.259 [2024-11-17 13:21:32.309815] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:11:43.259 [2024-11-17 13:21:32.309853] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:11:43.259 [2024-11-17 13:21:32.309889] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:43.259 [2024-11-17 13:21:32.312080] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:43.259 [2024-11-17 13:21:32.312156] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:43.259 [2024-11-17 13:21:32.312273] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:43.259 [2024-11-17 13:21:32.312371] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:43.259 [2024-11-17 13:21:32.312542] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:11:43.259 [2024-11-17 13:21:32.312595] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:43.259 [2024-11-17 13:21:32.312668] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:11:43.259 [2024-11-17 13:21:32.312807] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:43.259 [2024-11-17 13:21:32.312955] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:43.259 pt1 00:11:43.259 13:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.259 13:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:11:43.259 13:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:43.259 13:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:43.259 13:21:32 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:43.259 13:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:43.259 13:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:43.259 13:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:43.259 13:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:43.259 13:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.259 13:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:43.259 13:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.259 13:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:43.259 13:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.259 13:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.259 13:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.259 13:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.259 13:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:43.259 "name": "raid_bdev1", 00:11:43.259 "uuid": "3bad0d42-a655-4050-b1c8-34cd54ef927e", 00:11:43.259 "strip_size_kb": 0, 00:11:43.259 "state": "configuring", 00:11:43.259 "raid_level": "raid1", 00:11:43.259 "superblock": true, 00:11:43.259 "num_base_bdevs": 4, 00:11:43.259 "num_base_bdevs_discovered": 2, 00:11:43.259 "num_base_bdevs_operational": 3, 00:11:43.259 "base_bdevs_list": [ 00:11:43.259 { 00:11:43.259 "name": null, 00:11:43.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.259 "is_configured": false, 00:11:43.259 
"data_offset": 2048, 00:11:43.259 "data_size": 63488 00:11:43.259 }, 00:11:43.259 { 00:11:43.259 "name": "pt2", 00:11:43.259 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:43.259 "is_configured": true, 00:11:43.259 "data_offset": 2048, 00:11:43.259 "data_size": 63488 00:11:43.259 }, 00:11:43.259 { 00:11:43.259 "name": "pt3", 00:11:43.259 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:43.259 "is_configured": true, 00:11:43.259 "data_offset": 2048, 00:11:43.259 "data_size": 63488 00:11:43.259 }, 00:11:43.259 { 00:11:43.259 "name": null, 00:11:43.259 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:43.259 "is_configured": false, 00:11:43.259 "data_offset": 2048, 00:11:43.259 "data_size": 63488 00:11:43.259 } 00:11:43.259 ] 00:11:43.259 }' 00:11:43.259 13:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:43.259 13:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.828 13:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:11:43.828 13:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.828 13:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.828 13:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:43.828 13:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.828 13:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:11:43.828 13:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:43.828 13:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.828 13:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:11:43.828 [2024-11-17 13:21:32.832890] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:43.828 [2024-11-17 13:21:32.832954] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:43.828 [2024-11-17 13:21:32.832976] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:11:43.828 [2024-11-17 13:21:32.832985] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:43.828 [2024-11-17 13:21:32.833464] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:43.828 [2024-11-17 13:21:32.833490] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:43.828 [2024-11-17 13:21:32.833584] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:43.828 [2024-11-17 13:21:32.833616] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:43.829 [2024-11-17 13:21:32.833755] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:11:43.829 [2024-11-17 13:21:32.833763] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:43.829 [2024-11-17 13:21:32.834014] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:11:43.829 [2024-11-17 13:21:32.834181] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:11:43.829 [2024-11-17 13:21:32.834202] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:11:43.829 [2024-11-17 13:21:32.834369] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:43.829 pt4 00:11:43.829 13:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.829 13:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 
00:11:43.829 13:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:43.829 13:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:43.829 13:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:43.829 13:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:43.829 13:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:43.829 13:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:43.829 13:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.829 13:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:43.829 13:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.829 13:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.829 13:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:43.829 13:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.829 13:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.829 13:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.829 13:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:43.829 "name": "raid_bdev1", 00:11:43.829 "uuid": "3bad0d42-a655-4050-b1c8-34cd54ef927e", 00:11:43.829 "strip_size_kb": 0, 00:11:43.829 "state": "online", 00:11:43.829 "raid_level": "raid1", 00:11:43.829 "superblock": true, 00:11:43.829 "num_base_bdevs": 4, 00:11:43.829 "num_base_bdevs_discovered": 3, 00:11:43.829 "num_base_bdevs_operational": 3, 00:11:43.829 
"base_bdevs_list": [ 00:11:43.829 { 00:11:43.829 "name": null, 00:11:43.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.829 "is_configured": false, 00:11:43.829 "data_offset": 2048, 00:11:43.829 "data_size": 63488 00:11:43.829 }, 00:11:43.829 { 00:11:43.829 "name": "pt2", 00:11:43.829 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:43.829 "is_configured": true, 00:11:43.829 "data_offset": 2048, 00:11:43.829 "data_size": 63488 00:11:43.829 }, 00:11:43.829 { 00:11:43.829 "name": "pt3", 00:11:43.829 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:43.829 "is_configured": true, 00:11:43.829 "data_offset": 2048, 00:11:43.829 "data_size": 63488 00:11:43.829 }, 00:11:43.829 { 00:11:43.829 "name": "pt4", 00:11:43.829 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:43.829 "is_configured": true, 00:11:43.829 "data_offset": 2048, 00:11:43.829 "data_size": 63488 00:11:43.829 } 00:11:43.829 ] 00:11:43.829 }' 00:11:43.829 13:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:43.829 13:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.089 13:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:11:44.089 13:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.089 13:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.089 13:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:44.089 13:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.089 13:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:11:44.089 13:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:44.089 13:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # 
jq -r '.[] | .uuid' 00:11:44.089 13:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.089 13:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.089 [2024-11-17 13:21:33.312361] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:44.349 13:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.349 13:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 3bad0d42-a655-4050-b1c8-34cd54ef927e '!=' 3bad0d42-a655-4050-b1c8-34cd54ef927e ']' 00:11:44.349 13:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74425 00:11:44.349 13:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74425 ']' 00:11:44.349 13:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74425 00:11:44.349 13:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:11:44.349 13:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:44.349 13:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74425 00:11:44.349 13:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:44.349 killing process with pid 74425 00:11:44.349 13:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:44.349 13:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74425' 00:11:44.349 13:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74425 00:11:44.349 [2024-11-17 13:21:33.396555] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:44.349 [2024-11-17 13:21:33.396643] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:11:44.349 [2024-11-17 13:21:33.396714] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:44.349 [2024-11-17 13:21:33.396726] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:11:44.349 13:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74425 00:11:44.608 [2024-11-17 13:21:33.781220] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:45.990 13:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:45.990 00:11:45.990 real 0m8.336s 00:11:45.990 user 0m13.043s 00:11:45.990 sys 0m1.579s 00:11:45.990 ************************************ 00:11:45.990 END TEST raid_superblock_test 00:11:45.990 ************************************ 00:11:45.990 13:21:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:45.990 13:21:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.990 13:21:34 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:11:45.990 13:21:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:45.990 13:21:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:45.990 13:21:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:45.990 ************************************ 00:11:45.990 START TEST raid_read_error_test 00:11:45.990 ************************************ 00:11:45.990 13:21:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:11:45.990 13:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:45.990 13:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:45.990 13:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local 
error_io_type=read 00:11:45.990 13:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:45.990 13:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:45.990 13:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:45.990 13:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:45.990 13:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:45.990 13:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:45.990 13:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:45.990 13:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:45.990 13:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:45.990 13:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:45.990 13:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:45.990 13:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:45.990 13:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:45.990 13:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:45.990 13:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:45.990 13:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:45.990 13:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:45.990 13:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:45.990 13:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:45.990 
13:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:45.990 13:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:45.990 13:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:45.990 13:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:45.990 13:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:45.990 13:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ygTuqjk1lx 00:11:45.990 13:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74918 00:11:45.990 13:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:45.990 13:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74918 00:11:45.990 13:21:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 74918 ']' 00:11:45.990 13:21:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:45.990 13:21:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:45.990 13:21:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:45.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:45.990 13:21:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:45.990 13:21:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.990 [2024-11-17 13:21:35.021138] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:11:45.990 [2024-11-17 13:21:35.021395] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74918 ] 00:11:45.990 [2024-11-17 13:21:35.199901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:46.250 [2024-11-17 13:21:35.331306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.510 [2024-11-17 13:21:35.539883] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:46.510 [2024-11-17 13:21:35.540011] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:46.769 13:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:46.769 13:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:46.769 13:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:46.769 13:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:46.769 13:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.769 13:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.769 BaseBdev1_malloc 00:11:46.769 13:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.769 13:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:46.769 13:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.769 13:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.769 true 00:11:46.769 13:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:46.769 13:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:46.769 13:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.769 13:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.769 [2024-11-17 13:21:35.959711] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:46.769 [2024-11-17 13:21:35.959773] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.769 [2024-11-17 13:21:35.959793] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:46.769 [2024-11-17 13:21:35.959804] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.769 [2024-11-17 13:21:35.962083] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.769 [2024-11-17 13:21:35.962191] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:46.769 BaseBdev1 00:11:46.769 13:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.769 13:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:46.769 13:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:46.769 13:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.769 13:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.030 BaseBdev2_malloc 00:11:47.030 13:21:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.030 13:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:47.030 13:21:36 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.030 13:21:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.030 true 00:11:47.030 13:21:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.030 13:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:47.030 13:21:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.030 13:21:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.030 [2024-11-17 13:21:36.028395] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:47.030 [2024-11-17 13:21:36.028455] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.030 [2024-11-17 13:21:36.028472] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:47.030 [2024-11-17 13:21:36.028483] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.030 [2024-11-17 13:21:36.030601] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.030 [2024-11-17 13:21:36.030711] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:47.030 BaseBdev2 00:11:47.030 13:21:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.030 13:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:47.030 13:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:47.030 13:21:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.030 13:21:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.030 BaseBdev3_malloc 00:11:47.030 13:21:36 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.030 13:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:47.030 13:21:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.030 13:21:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.030 true 00:11:47.030 13:21:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.030 13:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:47.030 13:21:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.030 13:21:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.030 [2024-11-17 13:21:36.105975] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:47.030 [2024-11-17 13:21:36.106097] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.030 [2024-11-17 13:21:36.106117] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:47.030 [2024-11-17 13:21:36.106128] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.030 [2024-11-17 13:21:36.108176] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.030 [2024-11-17 13:21:36.108227] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:47.030 BaseBdev3 00:11:47.030 13:21:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.030 13:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:47.030 13:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:47.030 13:21:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.030 13:21:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.030 BaseBdev4_malloc 00:11:47.030 13:21:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.030 13:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:47.030 13:21:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.030 13:21:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.030 true 00:11:47.030 13:21:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.030 13:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:47.030 13:21:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.030 13:21:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.030 [2024-11-17 13:21:36.173455] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:47.030 [2024-11-17 13:21:36.173512] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.030 [2024-11-17 13:21:36.173528] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:47.030 [2024-11-17 13:21:36.173538] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.030 [2024-11-17 13:21:36.175549] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.030 [2024-11-17 13:21:36.175587] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:47.030 BaseBdev4 00:11:47.030 13:21:36 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.030 13:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:47.030 13:21:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.030 13:21:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.030 [2024-11-17 13:21:36.185501] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:47.030 [2024-11-17 13:21:36.187310] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:47.030 [2024-11-17 13:21:36.187379] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:47.030 [2024-11-17 13:21:36.187439] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:47.030 [2024-11-17 13:21:36.187665] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:47.030 [2024-11-17 13:21:36.187687] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:47.030 [2024-11-17 13:21:36.187951] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:47.030 [2024-11-17 13:21:36.188105] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:47.030 [2024-11-17 13:21:36.188115] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:47.030 [2024-11-17 13:21:36.188276] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:47.030 13:21:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.030 13:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:47.030 13:21:36 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:47.030 13:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:47.030 13:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:47.030 13:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:47.030 13:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:47.030 13:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.030 13:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.030 13:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.030 13:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.030 13:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.030 13:21:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.030 13:21:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.030 13:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:47.030 13:21:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.030 13:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.030 "name": "raid_bdev1", 00:11:47.030 "uuid": "f046b62c-1773-4021-8fea-55fd176e0483", 00:11:47.030 "strip_size_kb": 0, 00:11:47.030 "state": "online", 00:11:47.030 "raid_level": "raid1", 00:11:47.030 "superblock": true, 00:11:47.030 "num_base_bdevs": 4, 00:11:47.030 "num_base_bdevs_discovered": 4, 00:11:47.030 "num_base_bdevs_operational": 4, 00:11:47.030 "base_bdevs_list": [ 00:11:47.030 { 
00:11:47.030 "name": "BaseBdev1", 00:11:47.030 "uuid": "ea9e0aa4-9ec2-5d27-9452-e25c036f1ab8", 00:11:47.030 "is_configured": true, 00:11:47.030 "data_offset": 2048, 00:11:47.030 "data_size": 63488 00:11:47.030 }, 00:11:47.030 { 00:11:47.030 "name": "BaseBdev2", 00:11:47.030 "uuid": "d6e5239c-67d5-54bb-9c84-dd5301704b3a", 00:11:47.030 "is_configured": true, 00:11:47.030 "data_offset": 2048, 00:11:47.030 "data_size": 63488 00:11:47.030 }, 00:11:47.030 { 00:11:47.030 "name": "BaseBdev3", 00:11:47.030 "uuid": "5f2495a5-622d-561f-908e-89e652e5abe8", 00:11:47.031 "is_configured": true, 00:11:47.031 "data_offset": 2048, 00:11:47.031 "data_size": 63488 00:11:47.031 }, 00:11:47.031 { 00:11:47.031 "name": "BaseBdev4", 00:11:47.031 "uuid": "fa3b9011-0b7c-5545-b6a3-aab5426da2e8", 00:11:47.031 "is_configured": true, 00:11:47.031 "data_offset": 2048, 00:11:47.031 "data_size": 63488 00:11:47.031 } 00:11:47.031 ] 00:11:47.031 }' 00:11:47.031 13:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.031 13:21:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.600 13:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:47.600 13:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:47.600 [2024-11-17 13:21:36.729727] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:48.538 13:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:48.538 13:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.538 13:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.538 13:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.538 13:21:37 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:48.538 13:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:48.538 13:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:11:48.538 13:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:48.538 13:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:48.538 13:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:48.538 13:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:48.538 13:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:48.538 13:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:48.538 13:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:48.538 13:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.538 13:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.538 13:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.538 13:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.538 13:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.538 13:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.538 13:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.538 13:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.538 13:21:37 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.538 13:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.538 "name": "raid_bdev1", 00:11:48.538 "uuid": "f046b62c-1773-4021-8fea-55fd176e0483", 00:11:48.538 "strip_size_kb": 0, 00:11:48.538 "state": "online", 00:11:48.538 "raid_level": "raid1", 00:11:48.538 "superblock": true, 00:11:48.538 "num_base_bdevs": 4, 00:11:48.538 "num_base_bdevs_discovered": 4, 00:11:48.538 "num_base_bdevs_operational": 4, 00:11:48.538 "base_bdevs_list": [ 00:11:48.538 { 00:11:48.538 "name": "BaseBdev1", 00:11:48.538 "uuid": "ea9e0aa4-9ec2-5d27-9452-e25c036f1ab8", 00:11:48.538 "is_configured": true, 00:11:48.538 "data_offset": 2048, 00:11:48.538 "data_size": 63488 00:11:48.538 }, 00:11:48.538 { 00:11:48.538 "name": "BaseBdev2", 00:11:48.538 "uuid": "d6e5239c-67d5-54bb-9c84-dd5301704b3a", 00:11:48.538 "is_configured": true, 00:11:48.538 "data_offset": 2048, 00:11:48.538 "data_size": 63488 00:11:48.538 }, 00:11:48.538 { 00:11:48.538 "name": "BaseBdev3", 00:11:48.538 "uuid": "5f2495a5-622d-561f-908e-89e652e5abe8", 00:11:48.538 "is_configured": true, 00:11:48.538 "data_offset": 2048, 00:11:48.538 "data_size": 63488 00:11:48.538 }, 00:11:48.538 { 00:11:48.538 "name": "BaseBdev4", 00:11:48.538 "uuid": "fa3b9011-0b7c-5545-b6a3-aab5426da2e8", 00:11:48.538 "is_configured": true, 00:11:48.538 "data_offset": 2048, 00:11:48.538 "data_size": 63488 00:11:48.538 } 00:11:48.538 ] 00:11:48.538 }' 00:11:48.538 13:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.538 13:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.108 13:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:49.108 13:21:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.108 13:21:38 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:49.108 [2024-11-17 13:21:38.121574] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:49.108 [2024-11-17 13:21:38.121742] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:49.108 [2024-11-17 13:21:38.125035] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:49.108 [2024-11-17 13:21:38.125138] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:49.108 [2024-11-17 13:21:38.125379] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:49.108 [2024-11-17 13:21:38.125445] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:49.108 { 00:11:49.108 "results": [ 00:11:49.108 { 00:11:49.108 "job": "raid_bdev1", 00:11:49.108 "core_mask": "0x1", 00:11:49.108 "workload": "randrw", 00:11:49.108 "percentage": 50, 00:11:49.108 "status": "finished", 00:11:49.108 "queue_depth": 1, 00:11:49.108 "io_size": 131072, 00:11:49.108 "runtime": 1.392909, 00:11:49.108 "iops": 9661.076208137072, 00:11:49.108 "mibps": 1207.634526017134, 00:11:49.108 "io_failed": 0, 00:11:49.108 "io_timeout": 0, 00:11:49.108 "avg_latency_us": 100.45221626185686, 00:11:49.108 "min_latency_us": 22.022707423580787, 00:11:49.108 "max_latency_us": 1574.0087336244542 00:11:49.108 } 00:11:49.108 ], 00:11:49.108 "core_count": 1 00:11:49.108 } 00:11:49.109 13:21:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.109 13:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74918 00:11:49.109 13:21:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 74918 ']' 00:11:49.109 13:21:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 74918 00:11:49.109 13:21:38 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:11:49.109 13:21:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:49.109 13:21:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74918 00:11:49.109 13:21:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:49.109 13:21:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:49.109 killing process with pid 74918 00:11:49.109 13:21:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74918' 00:11:49.109 13:21:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 74918 00:11:49.109 [2024-11-17 13:21:38.176404] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:49.109 13:21:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 74918 00:11:49.368 [2024-11-17 13:21:38.532948] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:50.750 13:21:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ygTuqjk1lx 00:11:50.750 13:21:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:50.750 13:21:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:50.750 13:21:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:50.750 13:21:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:50.750 13:21:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:50.750 13:21:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:50.751 13:21:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:50.751 00:11:50.751 real 0m4.964s 00:11:50.751 user 0m5.845s 00:11:50.751 sys 0m0.611s 
00:11:50.751 13:21:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:50.751 ************************************ 00:11:50.751 END TEST raid_read_error_test 00:11:50.751 ************************************ 00:11:50.751 13:21:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.751 13:21:39 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:11:50.751 13:21:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:50.751 13:21:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:50.751 13:21:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:50.751 ************************************ 00:11:50.751 START TEST raid_write_error_test 00:11:50.751 ************************************ 00:11:50.751 13:21:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:11:50.751 13:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:50.751 13:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:50.751 13:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:50.751 13:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:50.751 13:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:50.751 13:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:50.751 13:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:50.751 13:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:50.751 13:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:50.751 13:21:39 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:50.751 13:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:50.751 13:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:50.751 13:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:50.751 13:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:50.751 13:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:50.751 13:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:50.751 13:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:50.751 13:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:50.751 13:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:51.011 13:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:51.011 13:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:51.011 13:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:51.011 13:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:51.011 13:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:51.011 13:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:51.011 13:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:51.011 13:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:51.011 13:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.9J8Ci5lmhd 00:11:51.011 13:21:39 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75058 00:11:51.011 13:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:51.011 13:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75058 00:11:51.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:51.011 13:21:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 75058 ']' 00:11:51.011 13:21:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:51.011 13:21:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:51.011 13:21:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:51.011 13:21:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:51.011 13:21:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.011 [2024-11-17 13:21:40.089800] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:11:51.011 [2024-11-17 13:21:40.089954] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75058 ] 00:11:51.270 [2024-11-17 13:21:40.277148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:51.270 [2024-11-17 13:21:40.438173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.530 [2024-11-17 13:21:40.701739] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:51.530 [2024-11-17 13:21:40.701823] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:51.789 13:21:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:51.789 13:21:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:51.789 13:21:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:51.789 13:21:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:51.789 13:21:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.789 13:21:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.049 BaseBdev1_malloc 00:11:52.049 13:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.049 13:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:52.049 13:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.049 13:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.049 true 00:11:52.049 13:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:52.049 13:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:52.049 13:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.049 13:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.049 [2024-11-17 13:21:41.035958] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:52.049 [2024-11-17 13:21:41.036030] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:52.049 [2024-11-17 13:21:41.036059] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:52.049 [2024-11-17 13:21:41.036075] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:52.049 [2024-11-17 13:21:41.038834] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:52.049 [2024-11-17 13:21:41.038951] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:52.049 BaseBdev1 00:11:52.049 13:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.049 13:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:52.049 13:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:52.049 13:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.049 13:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.049 BaseBdev2_malloc 00:11:52.049 13:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.049 13:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:52.049 13:21:41 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.049 13:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.049 true 00:11:52.049 13:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.049 13:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:52.049 13:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.049 13:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.049 [2024-11-17 13:21:41.114862] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:52.049 [2024-11-17 13:21:41.114928] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:52.049 [2024-11-17 13:21:41.114949] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:52.049 [2024-11-17 13:21:41.114962] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:52.049 [2024-11-17 13:21:41.117564] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:52.049 [2024-11-17 13:21:41.117606] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:52.049 BaseBdev2 00:11:52.049 13:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.049 13:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:52.049 13:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:52.049 13:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.049 13:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:52.049 BaseBdev3_malloc 00:11:52.049 13:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.049 13:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:52.050 13:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.050 13:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.050 true 00:11:52.050 13:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.050 13:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:52.050 13:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.050 13:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.050 [2024-11-17 13:21:41.208047] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:52.050 [2024-11-17 13:21:41.208127] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:52.050 [2024-11-17 13:21:41.208149] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:52.050 [2024-11-17 13:21:41.208161] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:52.050 [2024-11-17 13:21:41.210931] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:52.050 [2024-11-17 13:21:41.210972] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:52.050 BaseBdev3 00:11:52.050 13:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.050 13:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:52.050 13:21:41 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:52.050 13:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.050 13:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.050 BaseBdev4_malloc 00:11:52.050 13:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.050 13:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:52.050 13:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.050 13:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.310 true 00:11:52.310 13:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.310 13:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:52.310 13:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.310 13:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.310 [2024-11-17 13:21:41.285651] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:52.310 [2024-11-17 13:21:41.285721] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:52.310 [2024-11-17 13:21:41.285745] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:52.310 [2024-11-17 13:21:41.285759] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:52.310 [2024-11-17 13:21:41.288479] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:52.310 [2024-11-17 13:21:41.288530] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:52.310 BaseBdev4 
00:11:52.310 13:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.310 13:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:52.310 13:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.310 13:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.310 [2024-11-17 13:21:41.297698] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:52.310 [2024-11-17 13:21:41.299957] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:52.310 [2024-11-17 13:21:41.300033] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:52.310 [2024-11-17 13:21:41.300097] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:52.310 [2024-11-17 13:21:41.300350] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:52.310 [2024-11-17 13:21:41.300366] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:52.310 [2024-11-17 13:21:41.300648] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:52.310 [2024-11-17 13:21:41.300897] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:52.310 [2024-11-17 13:21:41.300911] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:52.310 [2024-11-17 13:21:41.301112] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:52.310 13:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.310 13:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:11:52.310 13:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:52.310 13:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:52.310 13:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:52.310 13:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:52.310 13:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:52.310 13:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.310 13:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.310 13:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.310 13:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.310 13:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.311 13:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.311 13:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:52.311 13:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.311 13:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.311 13:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.311 "name": "raid_bdev1", 00:11:52.311 "uuid": "b022b12f-bcc5-43ae-ae81-2d163f07a6eb", 00:11:52.311 "strip_size_kb": 0, 00:11:52.311 "state": "online", 00:11:52.311 "raid_level": "raid1", 00:11:52.311 "superblock": true, 00:11:52.311 "num_base_bdevs": 4, 00:11:52.311 "num_base_bdevs_discovered": 4, 00:11:52.311 
"num_base_bdevs_operational": 4, 00:11:52.311 "base_bdevs_list": [ 00:11:52.311 { 00:11:52.311 "name": "BaseBdev1", 00:11:52.311 "uuid": "a450b641-ed54-5003-9688-22566aa021aa", 00:11:52.311 "is_configured": true, 00:11:52.311 "data_offset": 2048, 00:11:52.311 "data_size": 63488 00:11:52.311 }, 00:11:52.311 { 00:11:52.311 "name": "BaseBdev2", 00:11:52.311 "uuid": "f594d6c8-2fd9-53b8-b6e2-38ac28dc094e", 00:11:52.311 "is_configured": true, 00:11:52.311 "data_offset": 2048, 00:11:52.311 "data_size": 63488 00:11:52.311 }, 00:11:52.311 { 00:11:52.311 "name": "BaseBdev3", 00:11:52.311 "uuid": "1ab1ec5d-b451-5258-b69c-7622e4722ced", 00:11:52.311 "is_configured": true, 00:11:52.311 "data_offset": 2048, 00:11:52.311 "data_size": 63488 00:11:52.311 }, 00:11:52.311 { 00:11:52.311 "name": "BaseBdev4", 00:11:52.311 "uuid": "00c8c793-618e-5aa6-8b6e-bfcebc015b16", 00:11:52.311 "is_configured": true, 00:11:52.311 "data_offset": 2048, 00:11:52.311 "data_size": 63488 00:11:52.311 } 00:11:52.311 ] 00:11:52.311 }' 00:11:52.311 13:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.311 13:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.880 13:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:52.880 13:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:52.880 [2024-11-17 13:21:41.906448] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:53.820 13:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:53.820 13:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.820 13:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.820 [2024-11-17 13:21:42.809036] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:11:53.820 [2024-11-17 13:21:42.809116] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:53.820 [2024-11-17 13:21:42.809391] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:11:53.820 13:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.820 13:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:53.820 13:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:53.820 13:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:11:53.820 13:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:11:53.820 13:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:53.820 13:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:53.820 13:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:53.820 13:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:53.820 13:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:53.820 13:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:53.820 13:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.820 13:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.820 13:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.820 13:21:42 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.820 13:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:53.820 13:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.820 13:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.820 13:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.820 13:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.820 13:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.820 "name": "raid_bdev1", 00:11:53.820 "uuid": "b022b12f-bcc5-43ae-ae81-2d163f07a6eb", 00:11:53.820 "strip_size_kb": 0, 00:11:53.820 "state": "online", 00:11:53.820 "raid_level": "raid1", 00:11:53.820 "superblock": true, 00:11:53.820 "num_base_bdevs": 4, 00:11:53.820 "num_base_bdevs_discovered": 3, 00:11:53.820 "num_base_bdevs_operational": 3, 00:11:53.820 "base_bdevs_list": [ 00:11:53.820 { 00:11:53.820 "name": null, 00:11:53.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.820 "is_configured": false, 00:11:53.820 "data_offset": 0, 00:11:53.820 "data_size": 63488 00:11:53.820 }, 00:11:53.820 { 00:11:53.820 "name": "BaseBdev2", 00:11:53.821 "uuid": "f594d6c8-2fd9-53b8-b6e2-38ac28dc094e", 00:11:53.821 "is_configured": true, 00:11:53.821 "data_offset": 2048, 00:11:53.821 "data_size": 63488 00:11:53.821 }, 00:11:53.821 { 00:11:53.821 "name": "BaseBdev3", 00:11:53.821 "uuid": "1ab1ec5d-b451-5258-b69c-7622e4722ced", 00:11:53.821 "is_configured": true, 00:11:53.821 "data_offset": 2048, 00:11:53.821 "data_size": 63488 00:11:53.821 }, 00:11:53.821 { 00:11:53.821 "name": "BaseBdev4", 00:11:53.821 "uuid": "00c8c793-618e-5aa6-8b6e-bfcebc015b16", 00:11:53.821 "is_configured": true, 00:11:53.821 "data_offset": 2048, 00:11:53.821 "data_size": 63488 00:11:53.821 } 00:11:53.821 ] 
00:11:53.821 }' 00:11:53.821 13:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.821 13:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.081 13:21:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:54.081 13:21:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.081 13:21:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.081 [2024-11-17 13:21:43.216791] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:54.081 [2024-11-17 13:21:43.216883] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:54.081 [2024-11-17 13:21:43.219729] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:54.081 [2024-11-17 13:21:43.219815] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:54.081 [2024-11-17 13:21:43.219930] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:54.081 [2024-11-17 13:21:43.219941] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:54.081 { 00:11:54.081 "results": [ 00:11:54.081 { 00:11:54.081 "job": "raid_bdev1", 00:11:54.081 "core_mask": "0x1", 00:11:54.081 "workload": "randrw", 00:11:54.081 "percentage": 50, 00:11:54.081 "status": "finished", 00:11:54.081 "queue_depth": 1, 00:11:54.081 "io_size": 131072, 00:11:54.081 "runtime": 1.31076, 00:11:54.081 "iops": 10475.601940858738, 00:11:54.081 "mibps": 1309.4502426073423, 00:11:54.081 "io_failed": 0, 00:11:54.081 "io_timeout": 0, 00:11:54.081 "avg_latency_us": 92.4398228087466, 00:11:54.081 "min_latency_us": 24.034934497816593, 00:11:54.081 "max_latency_us": 1359.3711790393013 00:11:54.081 } 00:11:54.081 ], 00:11:54.081 "core_count": 1 
00:11:54.081 } 00:11:54.081 13:21:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.081 13:21:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75058 00:11:54.081 13:21:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 75058 ']' 00:11:54.081 13:21:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 75058 00:11:54.081 13:21:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:11:54.081 13:21:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:54.081 13:21:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75058 00:11:54.081 killing process with pid 75058 00:11:54.081 13:21:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:54.081 13:21:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:54.081 13:21:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75058' 00:11:54.081 13:21:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 75058 00:11:54.081 [2024-11-17 13:21:43.269634] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:54.081 13:21:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 75058 00:11:54.675 [2024-11-17 13:21:43.606731] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:56.057 13:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.9J8Ci5lmhd 00:11:56.057 13:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:56.057 13:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:56.057 13:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:11:56.057 13:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:56.057 13:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:56.057 13:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:56.057 13:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:56.057 00:11:56.057 real 0m4.989s 00:11:56.057 user 0m5.756s 00:11:56.057 sys 0m0.738s 00:11:56.057 13:21:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:56.057 13:21:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.057 ************************************ 00:11:56.057 END TEST raid_write_error_test 00:11:56.057 ************************************ 00:11:56.057 13:21:45 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:11:56.057 13:21:45 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:11:56.057 13:21:45 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:11:56.057 13:21:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:11:56.057 13:21:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:56.057 13:21:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:56.057 ************************************ 00:11:56.057 START TEST raid_rebuild_test 00:11:56.057 ************************************ 00:11:56.057 13:21:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:11:56.057 13:21:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:56.057 13:21:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:11:56.057 13:21:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:11:56.057 
13:21:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:11:56.057 13:21:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:56.057 13:21:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:56.057 13:21:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:56.057 13:21:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:56.057 13:21:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:56.057 13:21:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:56.057 13:21:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:56.057 13:21:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:56.057 13:21:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:56.057 13:21:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:56.057 13:21:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:56.057 13:21:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:56.057 13:21:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:56.057 13:21:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:56.058 13:21:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:56.058 13:21:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:56.058 13:21:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:56.058 13:21:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:56.058 13:21:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:11:56.058 13:21:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75207 00:11:56.058 13:21:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:56.058 13:21:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75207 00:11:56.058 13:21:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 75207 ']' 00:11:56.058 13:21:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:56.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:56.058 13:21:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:56.058 13:21:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:56.058 13:21:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:56.058 13:21:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.058 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:56.058 Zero copy mechanism will not be used. 00:11:56.058 [2024-11-17 13:21:45.147716] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:11:56.058 [2024-11-17 13:21:45.147846] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75207 ] 00:11:56.318 [2024-11-17 13:21:45.325602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:56.318 [2024-11-17 13:21:45.464920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.577 [2024-11-17 13:21:45.710768] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:56.577 [2024-11-17 13:21:45.710838] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:57.147 13:21:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:57.147 13:21:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:11:57.147 13:21:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:57.147 13:21:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:57.147 13:21:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.147 13:21:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.147 BaseBdev1_malloc 00:11:57.147 13:21:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.147 13:21:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:57.147 13:21:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.147 13:21:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.147 [2024-11-17 13:21:46.124871] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:57.147 
[2024-11-17 13:21:46.124952] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.147 [2024-11-17 13:21:46.124980] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:57.147 [2024-11-17 13:21:46.124994] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.147 [2024-11-17 13:21:46.127585] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.147 [2024-11-17 13:21:46.127628] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:57.147 BaseBdev1 00:11:57.147 13:21:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.147 13:21:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:57.147 13:21:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:57.147 13:21:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.147 13:21:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.147 BaseBdev2_malloc 00:11:57.147 13:21:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.147 13:21:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:57.147 13:21:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.147 13:21:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.147 [2024-11-17 13:21:46.185014] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:57.147 [2024-11-17 13:21:46.185089] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.147 [2024-11-17 13:21:46.185113] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:11:57.147 [2024-11-17 13:21:46.185125] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.147 [2024-11-17 13:21:46.187624] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.147 [2024-11-17 13:21:46.187671] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:57.147 BaseBdev2 00:11:57.147 13:21:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.147 13:21:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:57.147 13:21:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.147 13:21:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.147 spare_malloc 00:11:57.147 13:21:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.147 13:21:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:57.147 13:21:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.147 13:21:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.147 spare_delay 00:11:57.147 13:21:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.147 13:21:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:57.147 13:21:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.147 13:21:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.147 [2024-11-17 13:21:46.271460] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:57.147 [2024-11-17 13:21:46.271526] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:11:57.147 [2024-11-17 13:21:46.271551] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:11:57.147 [2024-11-17 13:21:46.271564] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.147 [2024-11-17 13:21:46.274177] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.147 [2024-11-17 13:21:46.274240] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:57.147 spare 00:11:57.147 13:21:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.147 13:21:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:57.147 13:21:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.147 13:21:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.147 [2024-11-17 13:21:46.283497] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:57.147 [2024-11-17 13:21:46.285625] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:57.148 [2024-11-17 13:21:46.285728] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:57.148 [2024-11-17 13:21:46.285743] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:57.148 [2024-11-17 13:21:46.286038] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:57.148 [2024-11-17 13:21:46.286277] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:57.148 [2024-11-17 13:21:46.286293] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:57.148 [2024-11-17 13:21:46.286489] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:11:57.148 13:21:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.148 13:21:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:57.148 13:21:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:57.148 13:21:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:57.148 13:21:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:57.148 13:21:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:57.148 13:21:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:57.148 13:21:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.148 13:21:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.148 13:21:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.148 13:21:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.148 13:21:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:57.148 13:21:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.148 13:21:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.148 13:21:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.148 13:21:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.148 13:21:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.148 "name": "raid_bdev1", 00:11:57.148 "uuid": "053746a1-2685-4232-98c1-60c20679a299", 00:11:57.148 "strip_size_kb": 0, 00:11:57.148 "state": "online", 00:11:57.148 
"raid_level": "raid1", 00:11:57.148 "superblock": false, 00:11:57.148 "num_base_bdevs": 2, 00:11:57.148 "num_base_bdevs_discovered": 2, 00:11:57.148 "num_base_bdevs_operational": 2, 00:11:57.148 "base_bdevs_list": [ 00:11:57.148 { 00:11:57.148 "name": "BaseBdev1", 00:11:57.148 "uuid": "1a08636b-833e-5dfa-a4b5-e499e5abb96e", 00:11:57.148 "is_configured": true, 00:11:57.148 "data_offset": 0, 00:11:57.148 "data_size": 65536 00:11:57.148 }, 00:11:57.148 { 00:11:57.148 "name": "BaseBdev2", 00:11:57.148 "uuid": "a80e7d14-01c2-5006-9937-d9017b1fabb1", 00:11:57.148 "is_configured": true, 00:11:57.148 "data_offset": 0, 00:11:57.148 "data_size": 65536 00:11:57.148 } 00:11:57.148 ] 00:11:57.148 }' 00:11:57.148 13:21:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.148 13:21:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.717 13:21:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:57.718 13:21:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.718 13:21:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.718 13:21:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:57.718 [2024-11-17 13:21:46.747069] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:57.718 13:21:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.718 13:21:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:11:57.718 13:21:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.718 13:21:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:57.718 13:21:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.718 13:21:46 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.718 13:21:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.718 13:21:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:11:57.718 13:21:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:11:57.718 13:21:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:11:57.718 13:21:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:11:57.718 13:21:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:11:57.718 13:21:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:57.718 13:21:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:11:57.718 13:21:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:57.718 13:21:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:57.718 13:21:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:57.718 13:21:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:11:57.718 13:21:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:57.718 13:21:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:57.718 13:21:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:11:57.977 [2024-11-17 13:21:47.074352] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:57.977 /dev/nbd0 00:11:57.977 13:21:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:57.977 13:21:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- 
# waitfornbd nbd0 00:11:57.977 13:21:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:57.977 13:21:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:11:57.977 13:21:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:57.977 13:21:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:57.977 13:21:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:57.977 13:21:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:11:57.977 13:21:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:57.977 13:21:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:57.977 13:21:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:57.977 1+0 records in 00:11:57.977 1+0 records out 00:11:57.977 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000470998 s, 8.7 MB/s 00:11:57.977 13:21:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:57.977 13:21:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:11:57.977 13:21:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:57.977 13:21:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:57.977 13:21:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:11:57.977 13:21:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:57.977 13:21:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:57.977 13:21:47 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:11:57.977 13:21:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:11:57.977 13:21:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:12:03.255 65536+0 records in 00:12:03.255 65536+0 records out 00:12:03.255 33554432 bytes (34 MB, 32 MiB) copied, 5.17205 s, 6.5 MB/s 00:12:03.255 13:21:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:03.255 13:21:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:03.255 13:21:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:03.255 13:21:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:03.255 13:21:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:03.255 13:21:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:03.255 13:21:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:03.515 [2024-11-17 13:21:52.556178] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:03.515 13:21:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:03.515 13:21:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:03.515 13:21:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:03.515 13:21:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:03.515 13:21:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:03.515 13:21:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:03.515 13:21:52 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:12:03.515 13:21:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:03.515 13:21:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:03.515 13:21:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.515 13:21:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.515 [2024-11-17 13:21:52.596219] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:03.515 13:21:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.515 13:21:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:03.515 13:21:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:03.516 13:21:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:03.516 13:21:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:03.516 13:21:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:03.516 13:21:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:03.516 13:21:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.516 13:21:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.516 13:21:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.516 13:21:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.516 13:21:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.516 13:21:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:03.516 13:21:52 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.516 13:21:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.516 13:21:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.516 13:21:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.516 "name": "raid_bdev1", 00:12:03.516 "uuid": "053746a1-2685-4232-98c1-60c20679a299", 00:12:03.516 "strip_size_kb": 0, 00:12:03.516 "state": "online", 00:12:03.516 "raid_level": "raid1", 00:12:03.516 "superblock": false, 00:12:03.516 "num_base_bdevs": 2, 00:12:03.516 "num_base_bdevs_discovered": 1, 00:12:03.516 "num_base_bdevs_operational": 1, 00:12:03.516 "base_bdevs_list": [ 00:12:03.516 { 00:12:03.516 "name": null, 00:12:03.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.516 "is_configured": false, 00:12:03.516 "data_offset": 0, 00:12:03.516 "data_size": 65536 00:12:03.516 }, 00:12:03.516 { 00:12:03.516 "name": "BaseBdev2", 00:12:03.516 "uuid": "a80e7d14-01c2-5006-9937-d9017b1fabb1", 00:12:03.516 "is_configured": true, 00:12:03.516 "data_offset": 0, 00:12:03.516 "data_size": 65536 00:12:03.516 } 00:12:03.516 ] 00:12:03.516 }' 00:12:03.516 13:21:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.516 13:21:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.086 13:21:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:04.086 13:21:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.086 13:21:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.086 [2024-11-17 13:21:53.119403] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:04.086 [2024-11-17 13:21:53.138792] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 
00:12:04.086 13:21:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.086 13:21:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:04.086 [2024-11-17 13:21:53.141117] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:05.025 13:21:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:05.025 13:21:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:05.025 13:21:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:05.026 13:21:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:05.026 13:21:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:05.026 13:21:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.026 13:21:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:05.026 13:21:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.026 13:21:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.026 13:21:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.026 13:21:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:05.026 "name": "raid_bdev1", 00:12:05.026 "uuid": "053746a1-2685-4232-98c1-60c20679a299", 00:12:05.026 "strip_size_kb": 0, 00:12:05.026 "state": "online", 00:12:05.026 "raid_level": "raid1", 00:12:05.026 "superblock": false, 00:12:05.026 "num_base_bdevs": 2, 00:12:05.026 "num_base_bdevs_discovered": 2, 00:12:05.026 "num_base_bdevs_operational": 2, 00:12:05.026 "process": { 00:12:05.026 "type": "rebuild", 00:12:05.026 "target": "spare", 00:12:05.026 "progress": { 00:12:05.026 
"blocks": 20480, 00:12:05.026 "percent": 31 00:12:05.026 } 00:12:05.026 }, 00:12:05.026 "base_bdevs_list": [ 00:12:05.026 { 00:12:05.026 "name": "spare", 00:12:05.026 "uuid": "59eb119f-8a89-537a-85cb-4a7e2262a906", 00:12:05.026 "is_configured": true, 00:12:05.026 "data_offset": 0, 00:12:05.026 "data_size": 65536 00:12:05.026 }, 00:12:05.026 { 00:12:05.026 "name": "BaseBdev2", 00:12:05.026 "uuid": "a80e7d14-01c2-5006-9937-d9017b1fabb1", 00:12:05.026 "is_configured": true, 00:12:05.026 "data_offset": 0, 00:12:05.026 "data_size": 65536 00:12:05.026 } 00:12:05.026 ] 00:12:05.026 }' 00:12:05.026 13:21:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:05.026 13:21:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:05.026 13:21:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:05.285 13:21:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:05.285 13:21:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:05.285 13:21:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.285 13:21:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.285 [2024-11-17 13:21:54.292451] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:05.285 [2024-11-17 13:21:54.347580] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:05.285 [2024-11-17 13:21:54.347679] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:05.285 [2024-11-17 13:21:54.347697] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:05.285 [2024-11-17 13:21:54.347708] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:05.285 13:21:54 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.285 13:21:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:05.285 13:21:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:05.285 13:21:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:05.285 13:21:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:05.285 13:21:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:05.285 13:21:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:05.286 13:21:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.286 13:21:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.286 13:21:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.286 13:21:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.286 13:21:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.286 13:21:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:05.286 13:21:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.286 13:21:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.286 13:21:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.286 13:21:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.286 "name": "raid_bdev1", 00:12:05.286 "uuid": "053746a1-2685-4232-98c1-60c20679a299", 00:12:05.286 "strip_size_kb": 0, 00:12:05.286 "state": "online", 00:12:05.286 "raid_level": "raid1", 00:12:05.286 
"superblock": false, 00:12:05.286 "num_base_bdevs": 2, 00:12:05.286 "num_base_bdevs_discovered": 1, 00:12:05.286 "num_base_bdevs_operational": 1, 00:12:05.286 "base_bdevs_list": [ 00:12:05.286 { 00:12:05.286 "name": null, 00:12:05.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.286 "is_configured": false, 00:12:05.286 "data_offset": 0, 00:12:05.286 "data_size": 65536 00:12:05.286 }, 00:12:05.286 { 00:12:05.286 "name": "BaseBdev2", 00:12:05.286 "uuid": "a80e7d14-01c2-5006-9937-d9017b1fabb1", 00:12:05.286 "is_configured": true, 00:12:05.286 "data_offset": 0, 00:12:05.286 "data_size": 65536 00:12:05.286 } 00:12:05.286 ] 00:12:05.286 }' 00:12:05.286 13:21:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.286 13:21:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.854 13:21:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:05.854 13:21:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:05.854 13:21:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:05.854 13:21:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:05.854 13:21:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:05.854 13:21:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.854 13:21:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:05.854 13:21:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.854 13:21:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.854 13:21:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.854 13:21:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:12:05.854 "name": "raid_bdev1", 00:12:05.854 "uuid": "053746a1-2685-4232-98c1-60c20679a299", 00:12:05.854 "strip_size_kb": 0, 00:12:05.854 "state": "online", 00:12:05.854 "raid_level": "raid1", 00:12:05.854 "superblock": false, 00:12:05.854 "num_base_bdevs": 2, 00:12:05.854 "num_base_bdevs_discovered": 1, 00:12:05.854 "num_base_bdevs_operational": 1, 00:12:05.854 "base_bdevs_list": [ 00:12:05.854 { 00:12:05.854 "name": null, 00:12:05.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.854 "is_configured": false, 00:12:05.854 "data_offset": 0, 00:12:05.854 "data_size": 65536 00:12:05.854 }, 00:12:05.854 { 00:12:05.854 "name": "BaseBdev2", 00:12:05.854 "uuid": "a80e7d14-01c2-5006-9937-d9017b1fabb1", 00:12:05.854 "is_configured": true, 00:12:05.854 "data_offset": 0, 00:12:05.854 "data_size": 65536 00:12:05.854 } 00:12:05.854 ] 00:12:05.854 }' 00:12:05.854 13:21:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:05.854 13:21:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:05.854 13:21:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:05.854 13:21:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:05.854 13:21:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:05.854 13:21:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.854 13:21:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.854 [2024-11-17 13:21:54.944398] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:05.854 [2024-11-17 13:21:54.960470] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:12:05.854 13:21:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.854 
13:21:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:05.854 [2024-11-17 13:21:54.962328] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:06.790 13:21:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:06.790 13:21:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:06.790 13:21:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:06.790 13:21:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:06.790 13:21:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:06.790 13:21:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.790 13:21:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:06.790 13:21:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.790 13:21:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.790 13:21:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.049 13:21:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:07.049 "name": "raid_bdev1", 00:12:07.049 "uuid": "053746a1-2685-4232-98c1-60c20679a299", 00:12:07.049 "strip_size_kb": 0, 00:12:07.049 "state": "online", 00:12:07.049 "raid_level": "raid1", 00:12:07.049 "superblock": false, 00:12:07.049 "num_base_bdevs": 2, 00:12:07.049 "num_base_bdevs_discovered": 2, 00:12:07.049 "num_base_bdevs_operational": 2, 00:12:07.049 "process": { 00:12:07.049 "type": "rebuild", 00:12:07.049 "target": "spare", 00:12:07.049 "progress": { 00:12:07.049 "blocks": 20480, 00:12:07.049 "percent": 31 00:12:07.049 } 00:12:07.049 }, 00:12:07.049 "base_bdevs_list": [ 
00:12:07.049 { 00:12:07.049 "name": "spare", 00:12:07.049 "uuid": "59eb119f-8a89-537a-85cb-4a7e2262a906", 00:12:07.049 "is_configured": true, 00:12:07.049 "data_offset": 0, 00:12:07.049 "data_size": 65536 00:12:07.049 }, 00:12:07.049 { 00:12:07.049 "name": "BaseBdev2", 00:12:07.049 "uuid": "a80e7d14-01c2-5006-9937-d9017b1fabb1", 00:12:07.049 "is_configured": true, 00:12:07.049 "data_offset": 0, 00:12:07.049 "data_size": 65536 00:12:07.049 } 00:12:07.049 ] 00:12:07.049 }' 00:12:07.049 13:21:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:07.049 13:21:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:07.049 13:21:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:07.049 13:21:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:07.049 13:21:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:07.049 13:21:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:07.049 13:21:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:07.049 13:21:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:07.049 13:21:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=366 00:12:07.049 13:21:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:07.049 13:21:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:07.049 13:21:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:07.049 13:21:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:07.049 13:21:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:07.049 
13:21:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:07.049 13:21:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:07.049 13:21:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.049 13:21:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.049 13:21:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.049 13:21:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.049 13:21:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:07.049 "name": "raid_bdev1", 00:12:07.049 "uuid": "053746a1-2685-4232-98c1-60c20679a299", 00:12:07.049 "strip_size_kb": 0, 00:12:07.049 "state": "online", 00:12:07.049 "raid_level": "raid1", 00:12:07.049 "superblock": false, 00:12:07.049 "num_base_bdevs": 2, 00:12:07.049 "num_base_bdevs_discovered": 2, 00:12:07.049 "num_base_bdevs_operational": 2, 00:12:07.049 "process": { 00:12:07.049 "type": "rebuild", 00:12:07.049 "target": "spare", 00:12:07.049 "progress": { 00:12:07.049 "blocks": 22528, 00:12:07.049 "percent": 34 00:12:07.049 } 00:12:07.049 }, 00:12:07.049 "base_bdevs_list": [ 00:12:07.049 { 00:12:07.049 "name": "spare", 00:12:07.049 "uuid": "59eb119f-8a89-537a-85cb-4a7e2262a906", 00:12:07.049 "is_configured": true, 00:12:07.049 "data_offset": 0, 00:12:07.049 "data_size": 65536 00:12:07.049 }, 00:12:07.049 { 00:12:07.049 "name": "BaseBdev2", 00:12:07.049 "uuid": "a80e7d14-01c2-5006-9937-d9017b1fabb1", 00:12:07.049 "is_configured": true, 00:12:07.049 "data_offset": 0, 00:12:07.049 "data_size": 65536 00:12:07.049 } 00:12:07.049 ] 00:12:07.049 }' 00:12:07.049 13:21:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:07.049 13:21:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:12:07.049 13:21:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:07.049 13:21:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:07.049 13:21:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:08.426 13:21:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:08.426 13:21:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:08.426 13:21:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:08.426 13:21:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:08.426 13:21:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:08.426 13:21:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:08.426 13:21:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.426 13:21:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:08.426 13:21:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.426 13:21:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.426 13:21:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.426 13:21:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:08.426 "name": "raid_bdev1", 00:12:08.426 "uuid": "053746a1-2685-4232-98c1-60c20679a299", 00:12:08.426 "strip_size_kb": 0, 00:12:08.426 "state": "online", 00:12:08.426 "raid_level": "raid1", 00:12:08.426 "superblock": false, 00:12:08.426 "num_base_bdevs": 2, 00:12:08.426 "num_base_bdevs_discovered": 2, 00:12:08.426 "num_base_bdevs_operational": 2, 00:12:08.426 "process": { 
00:12:08.426 "type": "rebuild", 00:12:08.426 "target": "spare", 00:12:08.426 "progress": { 00:12:08.426 "blocks": 45056, 00:12:08.426 "percent": 68 00:12:08.426 } 00:12:08.426 }, 00:12:08.426 "base_bdevs_list": [ 00:12:08.426 { 00:12:08.426 "name": "spare", 00:12:08.426 "uuid": "59eb119f-8a89-537a-85cb-4a7e2262a906", 00:12:08.426 "is_configured": true, 00:12:08.426 "data_offset": 0, 00:12:08.426 "data_size": 65536 00:12:08.426 }, 00:12:08.426 { 00:12:08.426 "name": "BaseBdev2", 00:12:08.426 "uuid": "a80e7d14-01c2-5006-9937-d9017b1fabb1", 00:12:08.426 "is_configured": true, 00:12:08.426 "data_offset": 0, 00:12:08.426 "data_size": 65536 00:12:08.426 } 00:12:08.426 ] 00:12:08.426 }' 00:12:08.426 13:21:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:08.426 13:21:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:08.426 13:21:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:08.426 13:21:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:08.426 13:21:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:08.994 [2024-11-17 13:21:58.176233] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:08.994 [2024-11-17 13:21:58.176319] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:08.994 [2024-11-17 13:21:58.176368] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:09.253 13:21:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:09.253 13:21:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:09.253 13:21:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:09.253 13:21:58 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:09.253 13:21:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:09.253 13:21:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:09.253 13:21:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.253 13:21:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:09.253 13:21:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.253 13:21:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.253 13:21:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.253 13:21:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:09.253 "name": "raid_bdev1", 00:12:09.253 "uuid": "053746a1-2685-4232-98c1-60c20679a299", 00:12:09.253 "strip_size_kb": 0, 00:12:09.253 "state": "online", 00:12:09.253 "raid_level": "raid1", 00:12:09.253 "superblock": false, 00:12:09.253 "num_base_bdevs": 2, 00:12:09.253 "num_base_bdevs_discovered": 2, 00:12:09.253 "num_base_bdevs_operational": 2, 00:12:09.253 "base_bdevs_list": [ 00:12:09.253 { 00:12:09.253 "name": "spare", 00:12:09.253 "uuid": "59eb119f-8a89-537a-85cb-4a7e2262a906", 00:12:09.253 "is_configured": true, 00:12:09.253 "data_offset": 0, 00:12:09.253 "data_size": 65536 00:12:09.253 }, 00:12:09.253 { 00:12:09.253 "name": "BaseBdev2", 00:12:09.253 "uuid": "a80e7d14-01c2-5006-9937-d9017b1fabb1", 00:12:09.253 "is_configured": true, 00:12:09.253 "data_offset": 0, 00:12:09.253 "data_size": 65536 00:12:09.253 } 00:12:09.253 ] 00:12:09.253 }' 00:12:09.253 13:21:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:09.520 13:21:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:09.520 13:21:58 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:09.520 13:21:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:09.520 13:21:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:12:09.520 13:21:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:09.521 13:21:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:09.521 13:21:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:09.521 13:21:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:09.521 13:21:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:09.521 13:21:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:09.521 13:21:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.521 13:21:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.521 13:21:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.521 13:21:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.521 13:21:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:09.521 "name": "raid_bdev1", 00:12:09.521 "uuid": "053746a1-2685-4232-98c1-60c20679a299", 00:12:09.521 "strip_size_kb": 0, 00:12:09.521 "state": "online", 00:12:09.521 "raid_level": "raid1", 00:12:09.521 "superblock": false, 00:12:09.521 "num_base_bdevs": 2, 00:12:09.521 "num_base_bdevs_discovered": 2, 00:12:09.521 "num_base_bdevs_operational": 2, 00:12:09.521 "base_bdevs_list": [ 00:12:09.521 { 00:12:09.521 "name": "spare", 00:12:09.521 "uuid": "59eb119f-8a89-537a-85cb-4a7e2262a906", 00:12:09.521 "is_configured": true, 
00:12:09.521 "data_offset": 0, 00:12:09.521 "data_size": 65536 00:12:09.521 }, 00:12:09.521 { 00:12:09.521 "name": "BaseBdev2", 00:12:09.521 "uuid": "a80e7d14-01c2-5006-9937-d9017b1fabb1", 00:12:09.521 "is_configured": true, 00:12:09.521 "data_offset": 0, 00:12:09.521 "data_size": 65536 00:12:09.521 } 00:12:09.521 ] 00:12:09.521 }' 00:12:09.521 13:21:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:09.521 13:21:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:09.521 13:21:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:09.521 13:21:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:09.521 13:21:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:09.521 13:21:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:09.521 13:21:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:09.521 13:21:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:09.521 13:21:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:09.521 13:21:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:09.521 13:21:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.521 13:21:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.521 13:21:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.521 13:21:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.521 13:21:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:09.521 13:21:58 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.521 13:21:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.521 13:21:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.521 13:21:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.521 13:21:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.521 "name": "raid_bdev1", 00:12:09.521 "uuid": "053746a1-2685-4232-98c1-60c20679a299", 00:12:09.521 "strip_size_kb": 0, 00:12:09.521 "state": "online", 00:12:09.521 "raid_level": "raid1", 00:12:09.521 "superblock": false, 00:12:09.521 "num_base_bdevs": 2, 00:12:09.521 "num_base_bdevs_discovered": 2, 00:12:09.521 "num_base_bdevs_operational": 2, 00:12:09.521 "base_bdevs_list": [ 00:12:09.521 { 00:12:09.521 "name": "spare", 00:12:09.521 "uuid": "59eb119f-8a89-537a-85cb-4a7e2262a906", 00:12:09.521 "is_configured": true, 00:12:09.521 "data_offset": 0, 00:12:09.521 "data_size": 65536 00:12:09.521 }, 00:12:09.521 { 00:12:09.521 "name": "BaseBdev2", 00:12:09.521 "uuid": "a80e7d14-01c2-5006-9937-d9017b1fabb1", 00:12:09.521 "is_configured": true, 00:12:09.521 "data_offset": 0, 00:12:09.521 "data_size": 65536 00:12:09.521 } 00:12:09.521 ] 00:12:09.521 }' 00:12:09.521 13:21:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.521 13:21:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.104 13:21:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:10.104 13:21:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.104 13:21:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.104 [2024-11-17 13:21:59.070925] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:10.104 [2024-11-17 
13:21:59.071035] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:10.104 [2024-11-17 13:21:59.071144] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:10.104 [2024-11-17 13:21:59.071288] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:10.104 [2024-11-17 13:21:59.071342] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:10.104 13:21:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.104 13:21:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.104 13:21:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:12:10.104 13:21:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.104 13:21:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.104 13:21:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.104 13:21:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:10.104 13:21:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:10.104 13:21:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:10.104 13:21:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:10.104 13:21:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:10.105 13:21:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:10.105 13:21:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:10.105 13:21:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:10.105 13:21:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:10.105 13:21:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:10.105 13:21:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:10.105 13:21:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:10.105 13:21:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:10.363 /dev/nbd0 00:12:10.363 13:21:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:10.363 13:21:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:10.363 13:21:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:10.363 13:21:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:10.363 13:21:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:10.363 13:21:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:10.363 13:21:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:10.363 13:21:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:10.363 13:21:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:10.363 13:21:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:10.363 13:21:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:10.363 1+0 records in 00:12:10.363 1+0 records out 00:12:10.363 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000250593 s, 16.3 MB/s 00:12:10.363 13:21:59 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:10.363 13:21:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:10.363 13:21:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:10.363 13:21:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:10.364 13:21:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:10.364 13:21:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:10.364 13:21:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:10.364 13:21:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:10.364 /dev/nbd1 00:12:10.621 13:21:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:10.621 13:21:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:10.621 13:21:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:10.621 13:21:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:10.621 13:21:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:10.621 13:21:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:10.621 13:21:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:10.621 13:21:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:10.621 13:21:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:10.621 13:21:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:10.621 13:21:59 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:10.621 1+0 records in 00:12:10.621 1+0 records out 00:12:10.621 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000323716 s, 12.7 MB/s 00:12:10.621 13:21:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:10.621 13:21:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:10.622 13:21:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:10.622 13:21:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:10.622 13:21:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:10.622 13:21:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:10.622 13:21:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:10.622 13:21:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:10.622 13:21:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:10.622 13:21:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:10.622 13:21:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:10.622 13:21:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:10.622 13:21:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:10.622 13:21:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:10.622 13:21:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:10.880 13:22:00 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:10.880 13:22:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:10.880 13:22:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:10.880 13:22:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:10.880 13:22:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:10.880 13:22:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:10.880 13:22:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:10.880 13:22:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:10.880 13:22:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:10.880 13:22:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:11.139 13:22:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:11.139 13:22:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:11.139 13:22:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:11.139 13:22:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:11.139 13:22:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:11.139 13:22:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:11.139 13:22:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:11.139 13:22:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:11.139 13:22:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:11.139 13:22:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 
75207 00:12:11.139 13:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 75207 ']' 00:12:11.139 13:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 75207 00:12:11.139 13:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:12:11.139 13:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:11.139 13:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75207 00:12:11.139 13:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:11.139 13:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:11.139 13:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75207' 00:12:11.139 killing process with pid 75207 00:12:11.139 13:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 75207 00:12:11.139 Received shutdown signal, test time was about 60.000000 seconds 00:12:11.139 00:12:11.139 Latency(us) 00:12:11.139 [2024-11-17T13:22:00.363Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:11.139 [2024-11-17T13:22:00.363Z] =================================================================================================================== 00:12:11.139 [2024-11-17T13:22:00.363Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:11.139 [2024-11-17 13:22:00.305492] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:11.139 13:22:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 75207 00:12:11.398 [2024-11-17 13:22:00.606575] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:12.774 13:22:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:12:12.774 00:12:12.774 real 0m16.643s 00:12:12.774 user 0m18.277s 00:12:12.774 sys 
0m3.544s 00:12:12.774 ************************************ 00:12:12.774 END TEST raid_rebuild_test 00:12:12.774 ************************************ 00:12:12.774 13:22:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:12.774 13:22:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.774 13:22:01 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:12:12.774 13:22:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:12.774 13:22:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:12.774 13:22:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:12.774 ************************************ 00:12:12.774 START TEST raid_rebuild_test_sb 00:12:12.774 ************************************ 00:12:12.774 13:22:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:12:12.774 13:22:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:12.774 13:22:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:12.774 13:22:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:12.774 13:22:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:12.774 13:22:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:12.774 13:22:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:12.774 13:22:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:12.774 13:22:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:12.774 13:22:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:12.774 13:22:01 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:12.774 13:22:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:12.774 13:22:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:12.774 13:22:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:12.774 13:22:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:12.774 13:22:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:12.774 13:22:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:12.774 13:22:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:12.774 13:22:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:12.774 13:22:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:12.774 13:22:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:12.774 13:22:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:12.774 13:22:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:12.774 13:22:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:12.774 13:22:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:12.774 13:22:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75642 00:12:12.774 13:22:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75642 00:12:12.774 13:22:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:12.774 13:22:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' 
-z 75642 ']' 00:12:12.774 13:22:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:12.774 13:22:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:12.774 13:22:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:12.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:12.775 13:22:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:12.775 13:22:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.775 [2024-11-17 13:22:01.843966] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:12:12.775 [2024-11-17 13:22:01.844126] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75642 ] 00:12:12.775 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:12.775 Zero copy mechanism will not be used. 
00:12:13.033 [2024-11-17 13:22:02.014222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:13.033 [2024-11-17 13:22:02.126233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.292 [2024-11-17 13:22:02.323206] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:13.292 [2024-11-17 13:22:02.323337] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:13.553 13:22:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:13.553 13:22:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:13.553 13:22:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:13.553 13:22:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:13.553 13:22:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.553 13:22:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.553 BaseBdev1_malloc 00:12:13.553 13:22:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.553 13:22:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:13.553 13:22:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.553 13:22:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.553 [2024-11-17 13:22:02.726495] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:13.553 [2024-11-17 13:22:02.726621] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:13.553 [2024-11-17 13:22:02.726652] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:13.553 [2024-11-17 
13:22:02.726666] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:13.553 [2024-11-17 13:22:02.728810] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:13.553 [2024-11-17 13:22:02.728852] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:13.553 BaseBdev1 00:12:13.553 13:22:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.553 13:22:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:13.553 13:22:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:13.553 13:22:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.553 13:22:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.553 BaseBdev2_malloc 00:12:13.553 13:22:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.553 13:22:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:13.553 13:22:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.553 13:22:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.814 [2024-11-17 13:22:02.777873] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:13.814 [2024-11-17 13:22:02.777967] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:13.814 [2024-11-17 13:22:02.777989] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:13.814 [2024-11-17 13:22:02.778003] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:13.814 [2024-11-17 13:22:02.780059] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:12:13.814 [2024-11-17 13:22:02.780099] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:13.814 BaseBdev2 00:12:13.814 13:22:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.814 13:22:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:13.814 13:22:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.814 13:22:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.814 spare_malloc 00:12:13.814 13:22:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.814 13:22:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:13.814 13:22:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.814 13:22:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.814 spare_delay 00:12:13.814 13:22:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.814 13:22:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:13.814 13:22:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.814 13:22:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.814 [2024-11-17 13:22:02.851747] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:13.814 [2024-11-17 13:22:02.851863] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:13.814 [2024-11-17 13:22:02.851887] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:13.814 [2024-11-17 13:22:02.851899] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:13.814 [2024-11-17 13:22:02.854110] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:13.814 [2024-11-17 13:22:02.854155] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:13.814 spare 00:12:13.814 13:22:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.814 13:22:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:13.814 13:22:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.814 13:22:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.814 [2024-11-17 13:22:02.859773] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:13.814 [2024-11-17 13:22:02.861539] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:13.814 [2024-11-17 13:22:02.861704] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:13.814 [2024-11-17 13:22:02.861722] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:13.814 [2024-11-17 13:22:02.861968] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:13.814 [2024-11-17 13:22:02.862133] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:13.814 [2024-11-17 13:22:02.862143] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:13.814 [2024-11-17 13:22:02.862322] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:13.814 13:22:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.815 13:22:02 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:13.815 13:22:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:13.815 13:22:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:13.815 13:22:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:13.815 13:22:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:13.815 13:22:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:13.815 13:22:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.815 13:22:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.815 13:22:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.815 13:22:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.815 13:22:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.815 13:22:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.815 13:22:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.815 13:22:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.815 13:22:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.815 13:22:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.815 "name": "raid_bdev1", 00:12:13.815 "uuid": "5cacb03a-9041-42b8-92b7-367b831715a9", 00:12:13.815 "strip_size_kb": 0, 00:12:13.815 "state": "online", 00:12:13.815 "raid_level": "raid1", 00:12:13.815 "superblock": true, 00:12:13.815 "num_base_bdevs": 2, 00:12:13.815 
"num_base_bdevs_discovered": 2, 00:12:13.815 "num_base_bdevs_operational": 2, 00:12:13.815 "base_bdevs_list": [ 00:12:13.815 { 00:12:13.815 "name": "BaseBdev1", 00:12:13.815 "uuid": "d2e07f4e-026f-5a7c-862c-dce3620e87f2", 00:12:13.815 "is_configured": true, 00:12:13.815 "data_offset": 2048, 00:12:13.815 "data_size": 63488 00:12:13.815 }, 00:12:13.815 { 00:12:13.815 "name": "BaseBdev2", 00:12:13.815 "uuid": "e587ea4d-2cec-505a-acc2-1cb1f172d575", 00:12:13.815 "is_configured": true, 00:12:13.815 "data_offset": 2048, 00:12:13.815 "data_size": 63488 00:12:13.815 } 00:12:13.815 ] 00:12:13.815 }' 00:12:13.815 13:22:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.815 13:22:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.074 13:22:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:14.074 13:22:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:14.074 13:22:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.074 13:22:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.074 [2024-11-17 13:22:03.283324] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:14.334 13:22:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.334 13:22:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:14.334 13:22:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:14.334 13:22:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.334 13:22:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.334 13:22:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:12:14.334 13:22:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.334 13:22:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:14.334 13:22:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:14.334 13:22:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:14.334 13:22:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:14.334 13:22:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:14.334 13:22:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:14.334 13:22:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:14.334 13:22:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:14.334 13:22:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:14.334 13:22:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:14.334 13:22:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:14.334 13:22:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:14.334 13:22:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:14.334 13:22:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:14.334 [2024-11-17 13:22:03.538614] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:14.334 /dev/nbd0 00:12:14.594 13:22:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:14.594 13:22:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
00:12:14.594 13:22:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:14.594 13:22:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:14.594 13:22:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:14.594 13:22:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:14.594 13:22:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:14.594 13:22:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:14.594 13:22:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:14.594 13:22:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:14.594 13:22:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:14.594 1+0 records in 00:12:14.594 1+0 records out 00:12:14.594 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000580948 s, 7.1 MB/s 00:12:14.594 13:22:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:14.594 13:22:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:14.594 13:22:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:14.594 13:22:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:14.594 13:22:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:14.594 13:22:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:14.594 13:22:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:14.594 13:22:03 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:14.594 13:22:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:14.595 13:22:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:12:18.795 63488+0 records in 00:12:18.795 63488+0 records out 00:12:18.795 32505856 bytes (33 MB, 31 MiB) copied, 3.65395 s, 8.9 MB/s 00:12:18.795 13:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:18.795 13:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:18.795 13:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:18.795 13:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:18.795 13:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:18.795 13:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:18.795 13:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:18.795 13:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:18.795 [2024-11-17 13:22:07.479440] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:18.795 13:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:18.795 13:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:18.795 13:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:18.795 13:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:18.795 13:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd0 /proc/partitions 00:12:18.795 13:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:18.795 13:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:18.795 13:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:18.795 13:22:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.795 13:22:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.795 [2024-11-17 13:22:07.496381] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:18.795 13:22:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.795 13:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:18.795 13:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:18.795 13:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:18.795 13:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:18.795 13:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:18.795 13:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:18.795 13:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.795 13:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.795 13:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.795 13:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.795 13:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.795 13:22:07 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.795 13:22:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.795 13:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.795 13:22:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.795 13:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.795 "name": "raid_bdev1", 00:12:18.795 "uuid": "5cacb03a-9041-42b8-92b7-367b831715a9", 00:12:18.795 "strip_size_kb": 0, 00:12:18.795 "state": "online", 00:12:18.795 "raid_level": "raid1", 00:12:18.795 "superblock": true, 00:12:18.795 "num_base_bdevs": 2, 00:12:18.795 "num_base_bdevs_discovered": 1, 00:12:18.795 "num_base_bdevs_operational": 1, 00:12:18.795 "base_bdevs_list": [ 00:12:18.795 { 00:12:18.795 "name": null, 00:12:18.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.795 "is_configured": false, 00:12:18.795 "data_offset": 0, 00:12:18.795 "data_size": 63488 00:12:18.795 }, 00:12:18.795 { 00:12:18.795 "name": "BaseBdev2", 00:12:18.795 "uuid": "e587ea4d-2cec-505a-acc2-1cb1f172d575", 00:12:18.795 "is_configured": true, 00:12:18.795 "data_offset": 2048, 00:12:18.795 "data_size": 63488 00:12:18.795 } 00:12:18.795 ] 00:12:18.795 }' 00:12:18.795 13:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.795 13:22:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.795 13:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:18.795 13:22:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.795 13:22:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.795 [2024-11-17 13:22:07.951651] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev spare is claimed 00:12:18.795 [2024-11-17 13:22:07.968877] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:12:18.795 13:22:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.795 [2024-11-17 13:22:07.970751] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:18.795 13:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:20.299 13:22:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:20.299 13:22:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:20.299 13:22:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:20.299 13:22:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:20.299 13:22:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:20.299 13:22:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.299 13:22:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:20.299 13:22:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.299 13:22:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.299 13:22:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.299 13:22:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:20.299 "name": "raid_bdev1", 00:12:20.299 "uuid": "5cacb03a-9041-42b8-92b7-367b831715a9", 00:12:20.299 "strip_size_kb": 0, 00:12:20.299 "state": "online", 00:12:20.299 "raid_level": "raid1", 00:12:20.299 "superblock": true, 00:12:20.299 "num_base_bdevs": 2, 00:12:20.299 
"num_base_bdevs_discovered": 2, 00:12:20.299 "num_base_bdevs_operational": 2, 00:12:20.299 "process": { 00:12:20.299 "type": "rebuild", 00:12:20.299 "target": "spare", 00:12:20.299 "progress": { 00:12:20.299 "blocks": 20480, 00:12:20.299 "percent": 32 00:12:20.299 } 00:12:20.299 }, 00:12:20.299 "base_bdevs_list": [ 00:12:20.299 { 00:12:20.299 "name": "spare", 00:12:20.299 "uuid": "970f3957-06b8-55ec-a20e-2b6dbae74143", 00:12:20.299 "is_configured": true, 00:12:20.299 "data_offset": 2048, 00:12:20.299 "data_size": 63488 00:12:20.299 }, 00:12:20.299 { 00:12:20.299 "name": "BaseBdev2", 00:12:20.299 "uuid": "e587ea4d-2cec-505a-acc2-1cb1f172d575", 00:12:20.299 "is_configured": true, 00:12:20.299 "data_offset": 2048, 00:12:20.299 "data_size": 63488 00:12:20.299 } 00:12:20.299 ] 00:12:20.299 }' 00:12:20.299 13:22:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:20.299 13:22:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:20.299 13:22:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:20.299 13:22:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:20.299 13:22:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:20.299 13:22:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.299 13:22:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.299 [2024-11-17 13:22:09.130039] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:20.299 [2024-11-17 13:22:09.175466] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:20.299 [2024-11-17 13:22:09.175527] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:20.299 [2024-11-17 13:22:09.175542] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:20.299 [2024-11-17 13:22:09.175551] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:20.299 13:22:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.299 13:22:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:20.299 13:22:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:20.299 13:22:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:20.299 13:22:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:20.299 13:22:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:20.299 13:22:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:20.299 13:22:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.300 13:22:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.300 13:22:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.300 13:22:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.300 13:22:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.300 13:22:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.300 13:22:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.300 13:22:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:20.300 13:22:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.300 13:22:09 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.300 "name": "raid_bdev1", 00:12:20.300 "uuid": "5cacb03a-9041-42b8-92b7-367b831715a9", 00:12:20.300 "strip_size_kb": 0, 00:12:20.300 "state": "online", 00:12:20.300 "raid_level": "raid1", 00:12:20.300 "superblock": true, 00:12:20.300 "num_base_bdevs": 2, 00:12:20.300 "num_base_bdevs_discovered": 1, 00:12:20.300 "num_base_bdevs_operational": 1, 00:12:20.300 "base_bdevs_list": [ 00:12:20.300 { 00:12:20.300 "name": null, 00:12:20.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.300 "is_configured": false, 00:12:20.300 "data_offset": 0, 00:12:20.300 "data_size": 63488 00:12:20.300 }, 00:12:20.300 { 00:12:20.300 "name": "BaseBdev2", 00:12:20.300 "uuid": "e587ea4d-2cec-505a-acc2-1cb1f172d575", 00:12:20.300 "is_configured": true, 00:12:20.300 "data_offset": 2048, 00:12:20.300 "data_size": 63488 00:12:20.300 } 00:12:20.300 ] 00:12:20.300 }' 00:12:20.300 13:22:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.300 13:22:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.569 13:22:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:20.569 13:22:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:20.569 13:22:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:20.569 13:22:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:20.569 13:22:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:20.569 13:22:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.569 13:22:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.569 13:22:09 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:20.569 13:22:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:20.569 13:22:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.569 13:22:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:20.569 "name": "raid_bdev1", 00:12:20.569 "uuid": "5cacb03a-9041-42b8-92b7-367b831715a9", 00:12:20.569 "strip_size_kb": 0, 00:12:20.569 "state": "online", 00:12:20.569 "raid_level": "raid1", 00:12:20.569 "superblock": true, 00:12:20.569 "num_base_bdevs": 2, 00:12:20.569 "num_base_bdevs_discovered": 1, 00:12:20.569 "num_base_bdevs_operational": 1, 00:12:20.569 "base_bdevs_list": [ 00:12:20.569 { 00:12:20.569 "name": null, 00:12:20.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.569 "is_configured": false, 00:12:20.569 "data_offset": 0, 00:12:20.569 "data_size": 63488 00:12:20.569 }, 00:12:20.569 { 00:12:20.569 "name": "BaseBdev2", 00:12:20.569 "uuid": "e587ea4d-2cec-505a-acc2-1cb1f172d575", 00:12:20.570 "is_configured": true, 00:12:20.570 "data_offset": 2048, 00:12:20.570 "data_size": 63488 00:12:20.570 } 00:12:20.570 ] 00:12:20.570 }' 00:12:20.570 13:22:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:20.570 13:22:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:20.570 13:22:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:20.570 13:22:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:20.570 13:22:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:20.570 13:22:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.570 13:22:09 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:20.829 [2024-11-17 13:22:09.795031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:20.829 [2024-11-17 13:22:09.810876] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:12:20.829 13:22:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.829 13:22:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:20.829 [2024-11-17 13:22:09.812684] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:21.766 13:22:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:21.766 13:22:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:21.766 13:22:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:21.766 13:22:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:21.766 13:22:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:21.766 13:22:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.766 13:22:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:21.766 13:22:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.766 13:22:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.766 13:22:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.766 13:22:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:21.766 "name": "raid_bdev1", 00:12:21.766 "uuid": "5cacb03a-9041-42b8-92b7-367b831715a9", 00:12:21.766 "strip_size_kb": 0, 00:12:21.766 "state": "online", 
00:12:21.766 "raid_level": "raid1", 00:12:21.766 "superblock": true, 00:12:21.766 "num_base_bdevs": 2, 00:12:21.766 "num_base_bdevs_discovered": 2, 00:12:21.766 "num_base_bdevs_operational": 2, 00:12:21.766 "process": { 00:12:21.766 "type": "rebuild", 00:12:21.766 "target": "spare", 00:12:21.766 "progress": { 00:12:21.766 "blocks": 20480, 00:12:21.766 "percent": 32 00:12:21.766 } 00:12:21.766 }, 00:12:21.766 "base_bdevs_list": [ 00:12:21.766 { 00:12:21.766 "name": "spare", 00:12:21.766 "uuid": "970f3957-06b8-55ec-a20e-2b6dbae74143", 00:12:21.766 "is_configured": true, 00:12:21.766 "data_offset": 2048, 00:12:21.766 "data_size": 63488 00:12:21.766 }, 00:12:21.766 { 00:12:21.766 "name": "BaseBdev2", 00:12:21.766 "uuid": "e587ea4d-2cec-505a-acc2-1cb1f172d575", 00:12:21.766 "is_configured": true, 00:12:21.766 "data_offset": 2048, 00:12:21.766 "data_size": 63488 00:12:21.766 } 00:12:21.766 ] 00:12:21.766 }' 00:12:21.766 13:22:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:21.766 13:22:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:21.766 13:22:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:21.766 13:22:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:21.766 13:22:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:21.766 13:22:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:21.766 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:21.766 13:22:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:21.766 13:22:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:21.766 13:22:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 
']' 00:12:21.766 13:22:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=380 00:12:21.766 13:22:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:21.766 13:22:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:21.766 13:22:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:21.766 13:22:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:21.766 13:22:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:21.766 13:22:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:21.766 13:22:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.766 13:22:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.766 13:22:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.766 13:22:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:21.766 13:22:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.766 13:22:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:21.766 "name": "raid_bdev1", 00:12:21.766 "uuid": "5cacb03a-9041-42b8-92b7-367b831715a9", 00:12:21.766 "strip_size_kb": 0, 00:12:21.766 "state": "online", 00:12:21.766 "raid_level": "raid1", 00:12:21.766 "superblock": true, 00:12:21.766 "num_base_bdevs": 2, 00:12:21.766 "num_base_bdevs_discovered": 2, 00:12:21.766 "num_base_bdevs_operational": 2, 00:12:21.766 "process": { 00:12:21.766 "type": "rebuild", 00:12:21.766 "target": "spare", 00:12:21.766 "progress": { 00:12:21.766 "blocks": 22528, 00:12:21.766 "percent": 35 00:12:21.766 } 00:12:21.766 }, 00:12:21.766 
"base_bdevs_list": [ 00:12:21.766 { 00:12:21.766 "name": "spare", 00:12:21.766 "uuid": "970f3957-06b8-55ec-a20e-2b6dbae74143", 00:12:21.766 "is_configured": true, 00:12:21.766 "data_offset": 2048, 00:12:21.766 "data_size": 63488 00:12:21.766 }, 00:12:21.766 { 00:12:21.766 "name": "BaseBdev2", 00:12:21.766 "uuid": "e587ea4d-2cec-505a-acc2-1cb1f172d575", 00:12:21.766 "is_configured": true, 00:12:21.766 "data_offset": 2048, 00:12:21.766 "data_size": 63488 00:12:21.766 } 00:12:21.766 ] 00:12:21.766 }' 00:12:21.766 13:22:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:22.026 13:22:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:22.026 13:22:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:22.026 13:22:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:22.026 13:22:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:22.965 13:22:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:22.965 13:22:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:22.965 13:22:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:22.965 13:22:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:22.965 13:22:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:22.965 13:22:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:22.965 13:22:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.965 13:22:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.965 13:22:12 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:22.965 13:22:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:22.965 13:22:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.965 13:22:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:22.965 "name": "raid_bdev1", 00:12:22.965 "uuid": "5cacb03a-9041-42b8-92b7-367b831715a9", 00:12:22.965 "strip_size_kb": 0, 00:12:22.965 "state": "online", 00:12:22.965 "raid_level": "raid1", 00:12:22.965 "superblock": true, 00:12:22.965 "num_base_bdevs": 2, 00:12:22.965 "num_base_bdevs_discovered": 2, 00:12:22.965 "num_base_bdevs_operational": 2, 00:12:22.965 "process": { 00:12:22.965 "type": "rebuild", 00:12:22.965 "target": "spare", 00:12:22.965 "progress": { 00:12:22.965 "blocks": 45056, 00:12:22.965 "percent": 70 00:12:22.965 } 00:12:22.965 }, 00:12:22.965 "base_bdevs_list": [ 00:12:22.965 { 00:12:22.965 "name": "spare", 00:12:22.965 "uuid": "970f3957-06b8-55ec-a20e-2b6dbae74143", 00:12:22.965 "is_configured": true, 00:12:22.965 "data_offset": 2048, 00:12:22.965 "data_size": 63488 00:12:22.965 }, 00:12:22.965 { 00:12:22.965 "name": "BaseBdev2", 00:12:22.965 "uuid": "e587ea4d-2cec-505a-acc2-1cb1f172d575", 00:12:22.965 "is_configured": true, 00:12:22.965 "data_offset": 2048, 00:12:22.965 "data_size": 63488 00:12:22.965 } 00:12:22.965 ] 00:12:22.965 }' 00:12:22.965 13:22:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:22.965 13:22:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:22.965 13:22:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:23.225 13:22:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:23.225 13:22:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 
00:12:23.794 [2024-11-17 13:22:12.926004] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:23.794 [2024-11-17 13:22:12.926088] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:23.794 [2024-11-17 13:22:12.926234] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:24.054 13:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:24.054 13:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:24.054 13:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:24.054 13:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:24.054 13:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:24.054 13:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:24.054 13:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.054 13:22:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.054 13:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:24.054 13:22:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.054 13:22:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.314 13:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:24.314 "name": "raid_bdev1", 00:12:24.315 "uuid": "5cacb03a-9041-42b8-92b7-367b831715a9", 00:12:24.315 "strip_size_kb": 0, 00:12:24.315 "state": "online", 00:12:24.315 "raid_level": "raid1", 00:12:24.315 "superblock": true, 00:12:24.315 "num_base_bdevs": 2, 00:12:24.315 
"num_base_bdevs_discovered": 2, 00:12:24.315 "num_base_bdevs_operational": 2, 00:12:24.315 "base_bdevs_list": [ 00:12:24.315 { 00:12:24.315 "name": "spare", 00:12:24.315 "uuid": "970f3957-06b8-55ec-a20e-2b6dbae74143", 00:12:24.315 "is_configured": true, 00:12:24.315 "data_offset": 2048, 00:12:24.315 "data_size": 63488 00:12:24.315 }, 00:12:24.315 { 00:12:24.315 "name": "BaseBdev2", 00:12:24.315 "uuid": "e587ea4d-2cec-505a-acc2-1cb1f172d575", 00:12:24.315 "is_configured": true, 00:12:24.315 "data_offset": 2048, 00:12:24.315 "data_size": 63488 00:12:24.315 } 00:12:24.315 ] 00:12:24.315 }' 00:12:24.315 13:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:24.315 13:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:24.315 13:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:24.315 13:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:24.315 13:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:12:24.315 13:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:24.315 13:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:24.315 13:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:24.315 13:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:24.315 13:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:24.315 13:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.315 13:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:24.315 13:22:13 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.315 13:22:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.315 13:22:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.315 13:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:24.315 "name": "raid_bdev1", 00:12:24.315 "uuid": "5cacb03a-9041-42b8-92b7-367b831715a9", 00:12:24.315 "strip_size_kb": 0, 00:12:24.315 "state": "online", 00:12:24.315 "raid_level": "raid1", 00:12:24.315 "superblock": true, 00:12:24.315 "num_base_bdevs": 2, 00:12:24.315 "num_base_bdevs_discovered": 2, 00:12:24.315 "num_base_bdevs_operational": 2, 00:12:24.315 "base_bdevs_list": [ 00:12:24.315 { 00:12:24.315 "name": "spare", 00:12:24.315 "uuid": "970f3957-06b8-55ec-a20e-2b6dbae74143", 00:12:24.315 "is_configured": true, 00:12:24.315 "data_offset": 2048, 00:12:24.315 "data_size": 63488 00:12:24.315 }, 00:12:24.315 { 00:12:24.315 "name": "BaseBdev2", 00:12:24.315 "uuid": "e587ea4d-2cec-505a-acc2-1cb1f172d575", 00:12:24.315 "is_configured": true, 00:12:24.315 "data_offset": 2048, 00:12:24.315 "data_size": 63488 00:12:24.315 } 00:12:24.315 ] 00:12:24.315 }' 00:12:24.315 13:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:24.315 13:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:24.315 13:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:24.315 13:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:24.315 13:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:24.315 13:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:24.315 13:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # 
local expected_state=online 00:12:24.315 13:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:24.315 13:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:24.315 13:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:24.315 13:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.315 13:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.315 13:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.315 13:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.315 13:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.315 13:22:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.315 13:22:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.315 13:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:24.575 13:22:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.575 13:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.575 "name": "raid_bdev1", 00:12:24.575 "uuid": "5cacb03a-9041-42b8-92b7-367b831715a9", 00:12:24.575 "strip_size_kb": 0, 00:12:24.575 "state": "online", 00:12:24.575 "raid_level": "raid1", 00:12:24.575 "superblock": true, 00:12:24.575 "num_base_bdevs": 2, 00:12:24.575 "num_base_bdevs_discovered": 2, 00:12:24.575 "num_base_bdevs_operational": 2, 00:12:24.575 "base_bdevs_list": [ 00:12:24.575 { 00:12:24.575 "name": "spare", 00:12:24.575 "uuid": "970f3957-06b8-55ec-a20e-2b6dbae74143", 00:12:24.575 "is_configured": true, 00:12:24.575 "data_offset": 2048, 00:12:24.575 
"data_size": 63488 00:12:24.575 }, 00:12:24.575 { 00:12:24.575 "name": "BaseBdev2", 00:12:24.575 "uuid": "e587ea4d-2cec-505a-acc2-1cb1f172d575", 00:12:24.575 "is_configured": true, 00:12:24.575 "data_offset": 2048, 00:12:24.575 "data_size": 63488 00:12:24.575 } 00:12:24.575 ] 00:12:24.575 }' 00:12:24.575 13:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.575 13:22:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.836 13:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:24.836 13:22:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.836 13:22:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.836 [2024-11-17 13:22:13.938926] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:24.836 [2024-11-17 13:22:13.939028] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:24.836 [2024-11-17 13:22:13.939130] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:24.836 [2024-11-17 13:22:13.939273] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:24.837 [2024-11-17 13:22:13.939323] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:24.837 13:22:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.837 13:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.837 13:22:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.837 13:22:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.837 13:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 
00:12:24.837 13:22:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.837 13:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:24.837 13:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:24.837 13:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:24.837 13:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:24.837 13:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:24.837 13:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:24.837 13:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:24.837 13:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:24.837 13:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:24.837 13:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:24.837 13:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:24.837 13:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:24.837 13:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:25.098 /dev/nbd0 00:12:25.098 13:22:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:25.098 13:22:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:25.098 13:22:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:25.098 13:22:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 
-- # local i 00:12:25.098 13:22:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:25.098 13:22:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:25.098 13:22:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:25.098 13:22:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:25.098 13:22:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:25.098 13:22:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:25.098 13:22:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:25.098 1+0 records in 00:12:25.099 1+0 records out 00:12:25.099 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000390011 s, 10.5 MB/s 00:12:25.099 13:22:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:25.099 13:22:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:25.099 13:22:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:25.099 13:22:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:25.099 13:22:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:25.099 13:22:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:25.099 13:22:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:25.099 13:22:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:25.358 /dev/nbd1 00:12:25.358 13:22:14 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:25.358 13:22:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:25.358 13:22:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:25.358 13:22:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:25.358 13:22:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:25.358 13:22:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:25.358 13:22:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:25.358 13:22:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:25.358 13:22:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:25.358 13:22:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:25.358 13:22:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:25.358 1+0 records in 00:12:25.358 1+0 records out 00:12:25.358 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00037993 s, 10.8 MB/s 00:12:25.358 13:22:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:25.358 13:22:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:25.358 13:22:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:25.358 13:22:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:25.358 13:22:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:25.358 13:22:14 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:25.358 13:22:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:25.358 13:22:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:25.618 13:22:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:25.618 13:22:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:25.618 13:22:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:25.618 13:22:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:25.618 13:22:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:25.618 13:22:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:25.618 13:22:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:25.885 13:22:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:25.885 13:22:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:25.885 13:22:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:25.885 13:22:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:25.885 13:22:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:25.885 13:22:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:25.885 13:22:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:25.885 13:22:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:25.885 13:22:14 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:25.885 13:22:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:25.885 13:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:26.146 13:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:26.146 13:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:26.146 13:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:26.146 13:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:26.146 13:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:26.146 13:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:26.146 13:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:26.146 13:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:26.146 13:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:26.146 13:22:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.146 13:22:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.146 13:22:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.146 13:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:26.146 13:22:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.146 13:22:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.146 [2024-11-17 13:22:15.137449] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
spare_delay 00:12:26.146 [2024-11-17 13:22:15.137533] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:26.146 [2024-11-17 13:22:15.137557] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:26.146 [2024-11-17 13:22:15.137566] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:26.146 [2024-11-17 13:22:15.139816] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:26.146 [2024-11-17 13:22:15.139853] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:26.146 [2024-11-17 13:22:15.139944] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:26.146 [2024-11-17 13:22:15.139992] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:26.146 [2024-11-17 13:22:15.140140] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:26.146 spare 00:12:26.146 13:22:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.146 13:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:26.146 13:22:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.146 13:22:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.146 [2024-11-17 13:22:15.240052] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:12:26.146 [2024-11-17 13:22:15.240088] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:26.146 [2024-11-17 13:22:15.240408] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:12:26.146 [2024-11-17 13:22:15.240584] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:12:26.146 [2024-11-17 13:22:15.240597] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:12:26.146 [2024-11-17 13:22:15.240831] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:26.146 13:22:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.146 13:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:26.146 13:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:26.146 13:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:26.146 13:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:26.146 13:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:26.146 13:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:26.146 13:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.146 13:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.146 13:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.146 13:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.146 13:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.146 13:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.146 13:22:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.146 13:22:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.146 13:22:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.146 
13:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.146 "name": "raid_bdev1", 00:12:26.146 "uuid": "5cacb03a-9041-42b8-92b7-367b831715a9", 00:12:26.146 "strip_size_kb": 0, 00:12:26.146 "state": "online", 00:12:26.146 "raid_level": "raid1", 00:12:26.146 "superblock": true, 00:12:26.146 "num_base_bdevs": 2, 00:12:26.146 "num_base_bdevs_discovered": 2, 00:12:26.146 "num_base_bdevs_operational": 2, 00:12:26.146 "base_bdevs_list": [ 00:12:26.146 { 00:12:26.146 "name": "spare", 00:12:26.146 "uuid": "970f3957-06b8-55ec-a20e-2b6dbae74143", 00:12:26.146 "is_configured": true, 00:12:26.146 "data_offset": 2048, 00:12:26.146 "data_size": 63488 00:12:26.146 }, 00:12:26.146 { 00:12:26.146 "name": "BaseBdev2", 00:12:26.146 "uuid": "e587ea4d-2cec-505a-acc2-1cb1f172d575", 00:12:26.146 "is_configured": true, 00:12:26.146 "data_offset": 2048, 00:12:26.146 "data_size": 63488 00:12:26.146 } 00:12:26.146 ] 00:12:26.147 }' 00:12:26.147 13:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.147 13:22:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.406 13:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:26.406 13:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:26.406 13:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:26.406 13:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:26.406 13:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:26.406 13:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.406 13:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.666 13:22:15 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.666 13:22:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.666 13:22:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.666 13:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:26.666 "name": "raid_bdev1", 00:12:26.666 "uuid": "5cacb03a-9041-42b8-92b7-367b831715a9", 00:12:26.666 "strip_size_kb": 0, 00:12:26.666 "state": "online", 00:12:26.666 "raid_level": "raid1", 00:12:26.666 "superblock": true, 00:12:26.666 "num_base_bdevs": 2, 00:12:26.666 "num_base_bdevs_discovered": 2, 00:12:26.666 "num_base_bdevs_operational": 2, 00:12:26.666 "base_bdevs_list": [ 00:12:26.666 { 00:12:26.666 "name": "spare", 00:12:26.666 "uuid": "970f3957-06b8-55ec-a20e-2b6dbae74143", 00:12:26.666 "is_configured": true, 00:12:26.666 "data_offset": 2048, 00:12:26.666 "data_size": 63488 00:12:26.666 }, 00:12:26.666 { 00:12:26.666 "name": "BaseBdev2", 00:12:26.666 "uuid": "e587ea4d-2cec-505a-acc2-1cb1f172d575", 00:12:26.666 "is_configured": true, 00:12:26.666 "data_offset": 2048, 00:12:26.666 "data_size": 63488 00:12:26.666 } 00:12:26.666 ] 00:12:26.666 }' 00:12:26.666 13:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:26.666 13:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:26.666 13:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:26.666 13:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:26.666 13:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.666 13:22:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.666 13:22:15 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:26.667 13:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:26.667 13:22:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.667 13:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:26.667 13:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:26.667 13:22:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.667 13:22:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.667 [2024-11-17 13:22:15.832337] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:26.667 13:22:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.667 13:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:26.667 13:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:26.667 13:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:26.667 13:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:26.667 13:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:26.667 13:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:26.667 13:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.667 13:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.667 13:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.667 13:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:12:26.667 13:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.667 13:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.667 13:22:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.667 13:22:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.667 13:22:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.667 13:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.667 "name": "raid_bdev1", 00:12:26.667 "uuid": "5cacb03a-9041-42b8-92b7-367b831715a9", 00:12:26.667 "strip_size_kb": 0, 00:12:26.667 "state": "online", 00:12:26.667 "raid_level": "raid1", 00:12:26.667 "superblock": true, 00:12:26.667 "num_base_bdevs": 2, 00:12:26.667 "num_base_bdevs_discovered": 1, 00:12:26.667 "num_base_bdevs_operational": 1, 00:12:26.667 "base_bdevs_list": [ 00:12:26.667 { 00:12:26.667 "name": null, 00:12:26.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.667 "is_configured": false, 00:12:26.667 "data_offset": 0, 00:12:26.667 "data_size": 63488 00:12:26.667 }, 00:12:26.667 { 00:12:26.667 "name": "BaseBdev2", 00:12:26.667 "uuid": "e587ea4d-2cec-505a-acc2-1cb1f172d575", 00:12:26.667 "is_configured": true, 00:12:26.667 "data_offset": 2048, 00:12:26.667 "data_size": 63488 00:12:26.667 } 00:12:26.667 ] 00:12:26.667 }' 00:12:26.667 13:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.667 13:22:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.236 13:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:27.236 13:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.236 13:22:16 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.236 [2024-11-17 13:22:16.283553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:27.236 [2024-11-17 13:22:16.283834] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:27.236 [2024-11-17 13:22:16.283897] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:12:27.236 [2024-11-17 13:22:16.283966] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:27.236 [2024-11-17 13:22:16.298977] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:12:27.236 13:22:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.236 13:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:27.236 [2024-11-17 13:22:16.300843] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:28.176 13:22:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:28.176 13:22:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:28.176 13:22:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:28.176 13:22:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:28.176 13:22:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:28.176 13:22:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.176 13:22:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.176 13:22:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:28.176 13:22:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.176 13:22:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.176 13:22:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:28.176 "name": "raid_bdev1", 00:12:28.176 "uuid": "5cacb03a-9041-42b8-92b7-367b831715a9", 00:12:28.176 "strip_size_kb": 0, 00:12:28.176 "state": "online", 00:12:28.176 "raid_level": "raid1", 00:12:28.176 "superblock": true, 00:12:28.176 "num_base_bdevs": 2, 00:12:28.176 "num_base_bdevs_discovered": 2, 00:12:28.176 "num_base_bdevs_operational": 2, 00:12:28.176 "process": { 00:12:28.176 "type": "rebuild", 00:12:28.176 "target": "spare", 00:12:28.176 "progress": { 00:12:28.176 "blocks": 20480, 00:12:28.176 "percent": 32 00:12:28.176 } 00:12:28.176 }, 00:12:28.176 "base_bdevs_list": [ 00:12:28.176 { 00:12:28.176 "name": "spare", 00:12:28.176 "uuid": "970f3957-06b8-55ec-a20e-2b6dbae74143", 00:12:28.176 "is_configured": true, 00:12:28.176 "data_offset": 2048, 00:12:28.176 "data_size": 63488 00:12:28.176 }, 00:12:28.176 { 00:12:28.176 "name": "BaseBdev2", 00:12:28.176 "uuid": "e587ea4d-2cec-505a-acc2-1cb1f172d575", 00:12:28.176 "is_configured": true, 00:12:28.176 "data_offset": 2048, 00:12:28.176 "data_size": 63488 00:12:28.176 } 00:12:28.176 ] 00:12:28.176 }' 00:12:28.176 13:22:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:28.436 13:22:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:28.436 13:22:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:28.436 13:22:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:28.436 13:22:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:28.436 13:22:17 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.436 13:22:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.436 [2024-11-17 13:22:17.469092] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:28.436 [2024-11-17 13:22:17.505549] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:28.436 [2024-11-17 13:22:17.505658] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:28.436 [2024-11-17 13:22:17.505674] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:28.436 [2024-11-17 13:22:17.505683] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:28.436 13:22:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.436 13:22:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:28.436 13:22:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:28.436 13:22:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:28.436 13:22:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:28.436 13:22:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:28.436 13:22:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:28.436 13:22:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:28.436 13:22:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:28.436 13:22:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:28.436 13:22:17 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:28.436 13:22:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.436 13:22:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.436 13:22:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.436 13:22:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.436 13:22:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.436 13:22:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:28.436 "name": "raid_bdev1", 00:12:28.436 "uuid": "5cacb03a-9041-42b8-92b7-367b831715a9", 00:12:28.436 "strip_size_kb": 0, 00:12:28.436 "state": "online", 00:12:28.436 "raid_level": "raid1", 00:12:28.436 "superblock": true, 00:12:28.436 "num_base_bdevs": 2, 00:12:28.436 "num_base_bdevs_discovered": 1, 00:12:28.436 "num_base_bdevs_operational": 1, 00:12:28.436 "base_bdevs_list": [ 00:12:28.436 { 00:12:28.436 "name": null, 00:12:28.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.436 "is_configured": false, 00:12:28.436 "data_offset": 0, 00:12:28.436 "data_size": 63488 00:12:28.436 }, 00:12:28.436 { 00:12:28.436 "name": "BaseBdev2", 00:12:28.436 "uuid": "e587ea4d-2cec-505a-acc2-1cb1f172d575", 00:12:28.436 "is_configured": true, 00:12:28.436 "data_offset": 2048, 00:12:28.436 "data_size": 63488 00:12:28.436 } 00:12:28.436 ] 00:12:28.436 }' 00:12:28.436 13:22:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.436 13:22:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.006 13:22:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:29.006 13:22:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:29.006 13:22:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.006 [2024-11-17 13:22:17.942791] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:29.006 [2024-11-17 13:22:17.942924] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:29.006 [2024-11-17 13:22:17.942963] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:29.006 [2024-11-17 13:22:17.942991] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:29.006 [2024-11-17 13:22:17.943504] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:29.006 [2024-11-17 13:22:17.943567] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:29.006 [2024-11-17 13:22:17.943703] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:29.006 [2024-11-17 13:22:17.943746] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:29.006 [2024-11-17 13:22:17.943798] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:29.006 [2024-11-17 13:22:17.943861] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:29.006 [2024-11-17 13:22:17.959371] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:12:29.006 spare 00:12:29.006 13:22:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.006 [2024-11-17 13:22:17.961205] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:29.006 13:22:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:29.962 13:22:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:29.962 13:22:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:29.962 13:22:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:29.962 13:22:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:29.962 13:22:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:29.962 13:22:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.962 13:22:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.962 13:22:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.962 13:22:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.962 13:22:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.962 13:22:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:29.962 "name": "raid_bdev1", 00:12:29.962 "uuid": "5cacb03a-9041-42b8-92b7-367b831715a9", 00:12:29.962 "strip_size_kb": 0, 00:12:29.962 "state": "online", 00:12:29.962 
"raid_level": "raid1", 00:12:29.962 "superblock": true, 00:12:29.962 "num_base_bdevs": 2, 00:12:29.962 "num_base_bdevs_discovered": 2, 00:12:29.962 "num_base_bdevs_operational": 2, 00:12:29.962 "process": { 00:12:29.962 "type": "rebuild", 00:12:29.962 "target": "spare", 00:12:29.962 "progress": { 00:12:29.962 "blocks": 20480, 00:12:29.962 "percent": 32 00:12:29.962 } 00:12:29.962 }, 00:12:29.962 "base_bdevs_list": [ 00:12:29.962 { 00:12:29.962 "name": "spare", 00:12:29.962 "uuid": "970f3957-06b8-55ec-a20e-2b6dbae74143", 00:12:29.962 "is_configured": true, 00:12:29.962 "data_offset": 2048, 00:12:29.962 "data_size": 63488 00:12:29.962 }, 00:12:29.962 { 00:12:29.962 "name": "BaseBdev2", 00:12:29.962 "uuid": "e587ea4d-2cec-505a-acc2-1cb1f172d575", 00:12:29.962 "is_configured": true, 00:12:29.962 "data_offset": 2048, 00:12:29.962 "data_size": 63488 00:12:29.962 } 00:12:29.962 ] 00:12:29.962 }' 00:12:29.962 13:22:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:29.962 13:22:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:29.962 13:22:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:29.962 13:22:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:29.962 13:22:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:29.962 13:22:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.962 13:22:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.962 [2024-11-17 13:22:19.106093] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:29.962 [2024-11-17 13:22:19.165916] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:29.962 [2024-11-17 13:22:19.166043] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:29.962 [2024-11-17 13:22:19.166091] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:29.962 [2024-11-17 13:22:19.166110] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:30.245 13:22:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.245 13:22:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:30.245 13:22:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:30.245 13:22:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:30.245 13:22:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:30.245 13:22:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:30.245 13:22:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:30.245 13:22:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.245 13:22:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.245 13:22:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.245 13:22:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.245 13:22:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.245 13:22:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.245 13:22:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.245 13:22:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.245 13:22:19 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.245 13:22:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.245 "name": "raid_bdev1", 00:12:30.245 "uuid": "5cacb03a-9041-42b8-92b7-367b831715a9", 00:12:30.245 "strip_size_kb": 0, 00:12:30.245 "state": "online", 00:12:30.245 "raid_level": "raid1", 00:12:30.245 "superblock": true, 00:12:30.245 "num_base_bdevs": 2, 00:12:30.245 "num_base_bdevs_discovered": 1, 00:12:30.245 "num_base_bdevs_operational": 1, 00:12:30.245 "base_bdevs_list": [ 00:12:30.245 { 00:12:30.245 "name": null, 00:12:30.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.245 "is_configured": false, 00:12:30.245 "data_offset": 0, 00:12:30.245 "data_size": 63488 00:12:30.245 }, 00:12:30.245 { 00:12:30.245 "name": "BaseBdev2", 00:12:30.245 "uuid": "e587ea4d-2cec-505a-acc2-1cb1f172d575", 00:12:30.245 "is_configured": true, 00:12:30.245 "data_offset": 2048, 00:12:30.245 "data_size": 63488 00:12:30.245 } 00:12:30.245 ] 00:12:30.245 }' 00:12:30.245 13:22:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.245 13:22:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.504 13:22:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:30.504 13:22:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:30.504 13:22:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:30.504 13:22:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:30.504 13:22:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:30.504 13:22:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.504 13:22:19 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.504 13:22:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.504 13:22:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.504 13:22:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.504 13:22:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:30.504 "name": "raid_bdev1", 00:12:30.504 "uuid": "5cacb03a-9041-42b8-92b7-367b831715a9", 00:12:30.504 "strip_size_kb": 0, 00:12:30.504 "state": "online", 00:12:30.504 "raid_level": "raid1", 00:12:30.504 "superblock": true, 00:12:30.504 "num_base_bdevs": 2, 00:12:30.504 "num_base_bdevs_discovered": 1, 00:12:30.504 "num_base_bdevs_operational": 1, 00:12:30.504 "base_bdevs_list": [ 00:12:30.504 { 00:12:30.504 "name": null, 00:12:30.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.504 "is_configured": false, 00:12:30.504 "data_offset": 0, 00:12:30.504 "data_size": 63488 00:12:30.504 }, 00:12:30.504 { 00:12:30.504 "name": "BaseBdev2", 00:12:30.504 "uuid": "e587ea4d-2cec-505a-acc2-1cb1f172d575", 00:12:30.504 "is_configured": true, 00:12:30.504 "data_offset": 2048, 00:12:30.504 "data_size": 63488 00:12:30.504 } 00:12:30.504 ] 00:12:30.504 }' 00:12:30.504 13:22:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:30.504 13:22:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:30.504 13:22:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:30.764 13:22:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:30.764 13:22:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:30.764 13:22:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:30.764 13:22:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.764 13:22:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.764 13:22:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:30.764 13:22:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.764 13:22:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.764 [2024-11-17 13:22:19.773753] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:30.764 [2024-11-17 13:22:19.773807] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:30.764 [2024-11-17 13:22:19.773828] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:30.764 [2024-11-17 13:22:19.773846] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:30.764 [2024-11-17 13:22:19.774288] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:30.764 [2024-11-17 13:22:19.774306] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:30.764 [2024-11-17 13:22:19.774380] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:30.764 [2024-11-17 13:22:19.774394] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:30.764 [2024-11-17 13:22:19.774403] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:30.764 [2024-11-17 13:22:19.774413] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:30.764 BaseBdev1 00:12:30.764 13:22:19 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.764 13:22:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:31.703 13:22:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:31.703 13:22:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:31.703 13:22:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:31.703 13:22:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:31.703 13:22:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:31.703 13:22:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:31.703 13:22:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.703 13:22:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.703 13:22:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.703 13:22:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.703 13:22:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.703 13:22:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.703 13:22:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.703 13:22:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.703 13:22:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.703 13:22:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.703 "name": "raid_bdev1", 00:12:31.703 "uuid": "5cacb03a-9041-42b8-92b7-367b831715a9", 00:12:31.703 
"strip_size_kb": 0, 00:12:31.703 "state": "online", 00:12:31.703 "raid_level": "raid1", 00:12:31.703 "superblock": true, 00:12:31.703 "num_base_bdevs": 2, 00:12:31.703 "num_base_bdevs_discovered": 1, 00:12:31.703 "num_base_bdevs_operational": 1, 00:12:31.703 "base_bdevs_list": [ 00:12:31.703 { 00:12:31.703 "name": null, 00:12:31.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.703 "is_configured": false, 00:12:31.703 "data_offset": 0, 00:12:31.703 "data_size": 63488 00:12:31.703 }, 00:12:31.703 { 00:12:31.703 "name": "BaseBdev2", 00:12:31.703 "uuid": "e587ea4d-2cec-505a-acc2-1cb1f172d575", 00:12:31.703 "is_configured": true, 00:12:31.703 "data_offset": 2048, 00:12:31.703 "data_size": 63488 00:12:31.703 } 00:12:31.703 ] 00:12:31.703 }' 00:12:31.703 13:22:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.703 13:22:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.273 13:22:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:32.273 13:22:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:32.273 13:22:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:32.273 13:22:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:32.273 13:22:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:32.273 13:22:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.273 13:22:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.273 13:22:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.273 13:22:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.273 13:22:21 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.273 13:22:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:32.273 "name": "raid_bdev1", 00:12:32.273 "uuid": "5cacb03a-9041-42b8-92b7-367b831715a9", 00:12:32.273 "strip_size_kb": 0, 00:12:32.273 "state": "online", 00:12:32.273 "raid_level": "raid1", 00:12:32.273 "superblock": true, 00:12:32.273 "num_base_bdevs": 2, 00:12:32.273 "num_base_bdevs_discovered": 1, 00:12:32.273 "num_base_bdevs_operational": 1, 00:12:32.273 "base_bdevs_list": [ 00:12:32.273 { 00:12:32.273 "name": null, 00:12:32.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.273 "is_configured": false, 00:12:32.273 "data_offset": 0, 00:12:32.273 "data_size": 63488 00:12:32.273 }, 00:12:32.273 { 00:12:32.273 "name": "BaseBdev2", 00:12:32.273 "uuid": "e587ea4d-2cec-505a-acc2-1cb1f172d575", 00:12:32.273 "is_configured": true, 00:12:32.273 "data_offset": 2048, 00:12:32.273 "data_size": 63488 00:12:32.273 } 00:12:32.273 ] 00:12:32.273 }' 00:12:32.273 13:22:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:32.273 13:22:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:32.273 13:22:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:32.273 13:22:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:32.273 13:22:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:32.273 13:22:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:12:32.273 13:22:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:32.273 13:22:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local 
arg=rpc_cmd 00:12:32.273 13:22:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:32.273 13:22:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:32.273 13:22:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:32.273 13:22:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:32.273 13:22:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.273 13:22:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.273 [2024-11-17 13:22:21.407045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:32.273 [2024-11-17 13:22:21.407299] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:32.273 [2024-11-17 13:22:21.407355] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:32.273 request: 00:12:32.273 { 00:12:32.273 "base_bdev": "BaseBdev1", 00:12:32.273 "raid_bdev": "raid_bdev1", 00:12:32.273 "method": "bdev_raid_add_base_bdev", 00:12:32.273 "req_id": 1 00:12:32.273 } 00:12:32.273 Got JSON-RPC error response 00:12:32.273 response: 00:12:32.273 { 00:12:32.273 "code": -22, 00:12:32.273 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:32.273 } 00:12:32.273 13:22:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:32.273 13:22:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:12:32.273 13:22:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:32.273 13:22:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:32.273 13:22:21 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:32.273 13:22:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:33.212 13:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:33.212 13:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:33.212 13:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:33.212 13:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:33.212 13:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:33.212 13:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:33.212 13:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.212 13:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.212 13:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.212 13:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.212 13:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.212 13:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:33.212 13:22:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.212 13:22:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.472 13:22:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.472 13:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.472 "name": "raid_bdev1", 00:12:33.472 "uuid": 
"5cacb03a-9041-42b8-92b7-367b831715a9", 00:12:33.472 "strip_size_kb": 0, 00:12:33.472 "state": "online", 00:12:33.472 "raid_level": "raid1", 00:12:33.472 "superblock": true, 00:12:33.472 "num_base_bdevs": 2, 00:12:33.472 "num_base_bdevs_discovered": 1, 00:12:33.472 "num_base_bdevs_operational": 1, 00:12:33.472 "base_bdevs_list": [ 00:12:33.472 { 00:12:33.472 "name": null, 00:12:33.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.472 "is_configured": false, 00:12:33.472 "data_offset": 0, 00:12:33.472 "data_size": 63488 00:12:33.472 }, 00:12:33.472 { 00:12:33.472 "name": "BaseBdev2", 00:12:33.472 "uuid": "e587ea4d-2cec-505a-acc2-1cb1f172d575", 00:12:33.472 "is_configured": true, 00:12:33.472 "data_offset": 2048, 00:12:33.472 "data_size": 63488 00:12:33.472 } 00:12:33.472 ] 00:12:33.472 }' 00:12:33.472 13:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.472 13:22:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.734 13:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:33.734 13:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:33.734 13:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:33.734 13:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:33.734 13:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:33.734 13:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.734 13:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:33.734 13:22:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.734 13:22:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:12:33.734 13:22:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.734 13:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:33.734 "name": "raid_bdev1", 00:12:33.734 "uuid": "5cacb03a-9041-42b8-92b7-367b831715a9", 00:12:33.734 "strip_size_kb": 0, 00:12:33.734 "state": "online", 00:12:33.734 "raid_level": "raid1", 00:12:33.734 "superblock": true, 00:12:33.734 "num_base_bdevs": 2, 00:12:33.734 "num_base_bdevs_discovered": 1, 00:12:33.734 "num_base_bdevs_operational": 1, 00:12:33.734 "base_bdevs_list": [ 00:12:33.734 { 00:12:33.734 "name": null, 00:12:33.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.734 "is_configured": false, 00:12:33.734 "data_offset": 0, 00:12:33.734 "data_size": 63488 00:12:33.734 }, 00:12:33.734 { 00:12:33.734 "name": "BaseBdev2", 00:12:33.734 "uuid": "e587ea4d-2cec-505a-acc2-1cb1f172d575", 00:12:33.734 "is_configured": true, 00:12:33.734 "data_offset": 2048, 00:12:33.734 "data_size": 63488 00:12:33.734 } 00:12:33.734 ] 00:12:33.734 }' 00:12:33.734 13:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:33.734 13:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:33.734 13:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:33.994 13:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:33.994 13:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75642 00:12:33.994 13:22:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 75642 ']' 00:12:33.994 13:22:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 75642 00:12:33.994 13:22:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:33.994 13:22:22 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:33.994 13:22:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75642 00:12:33.994 13:22:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:33.994 killing process with pid 75642 00:12:33.994 Received shutdown signal, test time was about 60.000000 seconds 00:12:33.994 00:12:33.994 Latency(us) 00:12:33.994 [2024-11-17T13:22:23.218Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:33.994 [2024-11-17T13:22:23.218Z] =================================================================================================================== 00:12:33.994 [2024-11-17T13:22:23.218Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:33.994 13:22:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:33.994 13:22:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75642' 00:12:33.994 13:22:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 75642 00:12:33.994 [2024-11-17 13:22:23.035223] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:33.994 [2024-11-17 13:22:23.035357] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:33.995 [2024-11-17 13:22:23.035406] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:33.995 [2024-11-17 13:22:23.035418] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:12:33.995 13:22:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 75642 00:12:34.254 [2024-11-17 13:22:23.319810] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:35.194 13:22:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 
00:12:35.194 00:12:35.194 real 0m22.616s 00:12:35.194 user 0m27.775s 00:12:35.194 sys 0m3.623s 00:12:35.194 ************************************ 00:12:35.194 END TEST raid_rebuild_test_sb 00:12:35.194 ************************************ 00:12:35.194 13:22:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:35.194 13:22:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.456 13:22:24 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:12:35.456 13:22:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:35.456 13:22:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:35.456 13:22:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:35.456 ************************************ 00:12:35.456 START TEST raid_rebuild_test_io 00:12:35.456 ************************************ 00:12:35.456 13:22:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:12:35.456 13:22:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:35.456 13:22:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:35.456 13:22:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:35.456 13:22:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:35.456 13:22:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:35.456 13:22:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:35.456 13:22:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:35.456 13:22:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:35.456 13:22:24 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:35.456 13:22:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:35.456 13:22:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:35.456 13:22:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:35.456 13:22:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:35.456 13:22:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:35.456 13:22:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:35.456 13:22:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:35.456 13:22:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:35.456 13:22:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:35.456 13:22:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:35.456 13:22:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:35.456 13:22:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:35.456 13:22:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:35.456 13:22:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:35.456 13:22:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76361 00:12:35.456 13:22:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:35.456 13:22:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76361 00:12:35.456 13:22:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 
76361 ']' 00:12:35.456 13:22:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:35.456 13:22:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:35.456 13:22:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:35.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:35.456 13:22:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:35.456 13:22:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.456 [2024-11-17 13:22:24.551460] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:12:35.456 [2024-11-17 13:22:24.551676] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:12:35.456 Zero copy mechanism will not be used. 
00:12:35.456 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76361 ] 00:12:35.717 [2024-11-17 13:22:24.743941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:35.717 [2024-11-17 13:22:24.852550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.978 [2024-11-17 13:22:25.041554] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:35.978 [2024-11-17 13:22:25.041587] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:36.238 13:22:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:36.238 13:22:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:12:36.238 13:22:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:36.238 13:22:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:36.238 13:22:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.238 13:22:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.238 BaseBdev1_malloc 00:12:36.238 13:22:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.238 13:22:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:36.238 13:22:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.238 13:22:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.238 [2024-11-17 13:22:25.415320] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:36.238 [2024-11-17 13:22:25.415390] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:12:36.238 [2024-11-17 13:22:25.415415] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:36.238 [2024-11-17 13:22:25.415426] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:36.238 [2024-11-17 13:22:25.417429] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:36.238 [2024-11-17 13:22:25.417469] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:36.238 BaseBdev1 00:12:36.238 13:22:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.238 13:22:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:36.238 13:22:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:36.238 13:22:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.238 13:22:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.498 BaseBdev2_malloc 00:12:36.498 13:22:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.498 13:22:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:36.498 13:22:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.498 13:22:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.498 [2024-11-17 13:22:25.470302] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:36.498 [2024-11-17 13:22:25.470369] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:36.498 [2024-11-17 13:22:25.470391] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:36.498 [2024-11-17 13:22:25.470402] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:36.498 [2024-11-17 13:22:25.472435] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:36.498 [2024-11-17 13:22:25.472473] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:36.498 BaseBdev2 00:12:36.498 13:22:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.498 13:22:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:36.498 13:22:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.498 13:22:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.498 spare_malloc 00:12:36.498 13:22:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.498 13:22:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:36.498 13:22:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.498 13:22:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.498 spare_delay 00:12:36.498 13:22:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.498 13:22:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:36.498 13:22:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.498 13:22:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.499 [2024-11-17 13:22:25.570859] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:36.499 [2024-11-17 13:22:25.570914] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:12:36.499 [2024-11-17 13:22:25.570933] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:36.499 [2024-11-17 13:22:25.570943] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:36.499 [2024-11-17 13:22:25.572951] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:36.499 [2024-11-17 13:22:25.573088] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:36.499 spare 00:12:36.499 13:22:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.499 13:22:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:36.499 13:22:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.499 13:22:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.499 [2024-11-17 13:22:25.582878] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:36.499 [2024-11-17 13:22:25.584777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:36.499 [2024-11-17 13:22:25.584862] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:36.499 [2024-11-17 13:22:25.584876] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:36.499 [2024-11-17 13:22:25.585112] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:36.499 [2024-11-17 13:22:25.585276] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:36.499 [2024-11-17 13:22:25.585288] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:36.499 [2024-11-17 13:22:25.585438] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:12:36.499 13:22:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.499 13:22:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:36.499 13:22:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:36.499 13:22:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:36.499 13:22:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:36.499 13:22:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:36.499 13:22:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:36.499 13:22:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.499 13:22:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.499 13:22:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.499 13:22:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.499 13:22:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.499 13:22:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.499 13:22:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.499 13:22:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.499 13:22:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.499 13:22:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.499 "name": "raid_bdev1", 00:12:36.499 "uuid": "eee7bfc0-df5d-4c6e-9ee4-6c97da777319", 00:12:36.499 
"strip_size_kb": 0, 00:12:36.499 "state": "online", 00:12:36.499 "raid_level": "raid1", 00:12:36.499 "superblock": false, 00:12:36.499 "num_base_bdevs": 2, 00:12:36.499 "num_base_bdevs_discovered": 2, 00:12:36.499 "num_base_bdevs_operational": 2, 00:12:36.499 "base_bdevs_list": [ 00:12:36.499 { 00:12:36.499 "name": "BaseBdev1", 00:12:36.499 "uuid": "71ec4d62-ad65-55be-866e-48ad00e159b3", 00:12:36.499 "is_configured": true, 00:12:36.499 "data_offset": 0, 00:12:36.499 "data_size": 65536 00:12:36.499 }, 00:12:36.499 { 00:12:36.499 "name": "BaseBdev2", 00:12:36.499 "uuid": "c731fabc-f2a7-5d16-ba5f-24b5919277e6", 00:12:36.499 "is_configured": true, 00:12:36.499 "data_offset": 0, 00:12:36.499 "data_size": 65536 00:12:36.499 } 00:12:36.499 ] 00:12:36.499 }' 00:12:36.499 13:22:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.499 13:22:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:37.069 13:22:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:37.069 13:22:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:37.069 13:22:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.069 13:22:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:37.069 [2024-11-17 13:22:26.030382] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:37.069 13:22:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.069 13:22:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:37.069 13:22:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:37.069 13:22:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.069 13:22:26 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.069 13:22:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:37.069 13:22:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.069 13:22:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:37.069 13:22:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:37.069 13:22:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:37.069 13:22:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:37.069 13:22:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.069 13:22:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:37.069 [2024-11-17 13:22:26.109937] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:37.069 13:22:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.069 13:22:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:37.069 13:22:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:37.069 13:22:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:37.069 13:22:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:37.069 13:22:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:37.069 13:22:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:37.069 13:22:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.069 13:22:26 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.069 13:22:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.069 13:22:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.069 13:22:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.069 13:22:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.069 13:22:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.069 13:22:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:37.069 13:22:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.069 13:22:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.069 "name": "raid_bdev1", 00:12:37.069 "uuid": "eee7bfc0-df5d-4c6e-9ee4-6c97da777319", 00:12:37.069 "strip_size_kb": 0, 00:12:37.069 "state": "online", 00:12:37.069 "raid_level": "raid1", 00:12:37.069 "superblock": false, 00:12:37.069 "num_base_bdevs": 2, 00:12:37.069 "num_base_bdevs_discovered": 1, 00:12:37.069 "num_base_bdevs_operational": 1, 00:12:37.069 "base_bdevs_list": [ 00:12:37.069 { 00:12:37.069 "name": null, 00:12:37.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.069 "is_configured": false, 00:12:37.069 "data_offset": 0, 00:12:37.069 "data_size": 65536 00:12:37.069 }, 00:12:37.069 { 00:12:37.069 "name": "BaseBdev2", 00:12:37.069 "uuid": "c731fabc-f2a7-5d16-ba5f-24b5919277e6", 00:12:37.069 "is_configured": true, 00:12:37.069 "data_offset": 0, 00:12:37.069 "data_size": 65536 00:12:37.069 } 00:12:37.069 ] 00:12:37.069 }' 00:12:37.070 13:22:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.070 13:22:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 
00:12:37.070 [2024-11-17 13:22:26.208467] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:37.070 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:37.070 Zero copy mechanism will not be used. 00:12:37.070 Running I/O for 60 seconds... 00:12:37.331 13:22:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:37.331 13:22:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.331 13:22:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:37.592 [2024-11-17 13:22:26.560712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:37.592 13:22:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.592 13:22:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:37.592 [2024-11-17 13:22:26.621367] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:37.592 [2024-11-17 13:22:26.623487] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:37.592 [2024-11-17 13:22:26.736681] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:37.592 [2024-11-17 13:22:26.737340] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:37.852 [2024-11-17 13:22:26.951825] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:37.852 [2024-11-17 13:22:26.952294] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:38.112 197.00 IOPS, 591.00 MiB/s [2024-11-17T13:22:27.336Z] [2024-11-17 13:22:27.280258] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: 
split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:38.372 [2024-11-17 13:22:27.488293] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:38.372 [2024-11-17 13:22:27.488662] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:38.632 13:22:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:38.632 13:22:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:38.632 13:22:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:38.632 13:22:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:38.632 13:22:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:38.632 13:22:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.632 13:22:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.632 13:22:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.632 13:22:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.632 13:22:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.632 13:22:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:38.632 "name": "raid_bdev1", 00:12:38.632 "uuid": "eee7bfc0-df5d-4c6e-9ee4-6c97da777319", 00:12:38.632 "strip_size_kb": 0, 00:12:38.632 "state": "online", 00:12:38.632 "raid_level": "raid1", 00:12:38.632 "superblock": false, 00:12:38.632 "num_base_bdevs": 2, 00:12:38.632 "num_base_bdevs_discovered": 2, 00:12:38.632 "num_base_bdevs_operational": 2, 00:12:38.632 "process": { 00:12:38.632 
"type": "rebuild", 00:12:38.632 "target": "spare", 00:12:38.632 "progress": { 00:12:38.632 "blocks": 10240, 00:12:38.632 "percent": 15 00:12:38.632 } 00:12:38.632 }, 00:12:38.632 "base_bdevs_list": [ 00:12:38.632 { 00:12:38.632 "name": "spare", 00:12:38.632 "uuid": "8eb55c72-cc2e-5968-a9ed-bc35f663dabc", 00:12:38.632 "is_configured": true, 00:12:38.632 "data_offset": 0, 00:12:38.632 "data_size": 65536 00:12:38.632 }, 00:12:38.632 { 00:12:38.632 "name": "BaseBdev2", 00:12:38.632 "uuid": "c731fabc-f2a7-5d16-ba5f-24b5919277e6", 00:12:38.632 "is_configured": true, 00:12:38.632 "data_offset": 0, 00:12:38.632 "data_size": 65536 00:12:38.632 } 00:12:38.632 ] 00:12:38.632 }' 00:12:38.632 13:22:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:38.632 13:22:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:38.632 13:22:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:38.632 [2024-11-17 13:22:27.728897] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:38.632 [2024-11-17 13:22:27.729458] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:38.632 13:22:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:38.632 13:22:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:38.632 13:22:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.632 13:22:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.632 [2024-11-17 13:22:27.758693] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:38.632 [2024-11-17 13:22:27.842195] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:38.632 [2024-11-17 13:22:27.842569] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:38.632 [2024-11-17 13:22:27.843803] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:38.632 [2024-11-17 13:22:27.850962] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:38.632 [2024-11-17 13:22:27.850998] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:38.632 [2024-11-17 13:22:27.851008] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:38.893 [2024-11-17 13:22:27.891648] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:12:38.893 13:22:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.893 13:22:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:38.893 13:22:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:38.893 13:22:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:38.893 13:22:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:38.893 13:22:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:38.893 13:22:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:38.893 13:22:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.893 13:22:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.893 13:22:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:12:38.893 13:22:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.893 13:22:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.893 13:22:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.893 13:22:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.893 13:22:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.893 13:22:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.893 13:22:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.893 "name": "raid_bdev1", 00:12:38.893 "uuid": "eee7bfc0-df5d-4c6e-9ee4-6c97da777319", 00:12:38.893 "strip_size_kb": 0, 00:12:38.893 "state": "online", 00:12:38.893 "raid_level": "raid1", 00:12:38.893 "superblock": false, 00:12:38.893 "num_base_bdevs": 2, 00:12:38.893 "num_base_bdevs_discovered": 1, 00:12:38.893 "num_base_bdevs_operational": 1, 00:12:38.893 "base_bdevs_list": [ 00:12:38.893 { 00:12:38.893 "name": null, 00:12:38.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.893 "is_configured": false, 00:12:38.893 "data_offset": 0, 00:12:38.893 "data_size": 65536 00:12:38.893 }, 00:12:38.893 { 00:12:38.893 "name": "BaseBdev2", 00:12:38.893 "uuid": "c731fabc-f2a7-5d16-ba5f-24b5919277e6", 00:12:38.893 "is_configured": true, 00:12:38.893 "data_offset": 0, 00:12:38.893 "data_size": 65536 00:12:38.893 } 00:12:38.893 ] 00:12:38.893 }' 00:12:38.893 13:22:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.893 13:22:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:39.153 173.00 IOPS, 519.00 MiB/s [2024-11-17T13:22:28.377Z] 13:22:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:39.413 
13:22:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:39.413 13:22:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:39.413 13:22:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:39.413 13:22:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:39.413 13:22:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:39.413 13:22:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.413 13:22:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.413 13:22:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:39.413 13:22:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.413 13:22:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:39.413 "name": "raid_bdev1", 00:12:39.413 "uuid": "eee7bfc0-df5d-4c6e-9ee4-6c97da777319", 00:12:39.413 "strip_size_kb": 0, 00:12:39.413 "state": "online", 00:12:39.413 "raid_level": "raid1", 00:12:39.413 "superblock": false, 00:12:39.413 "num_base_bdevs": 2, 00:12:39.413 "num_base_bdevs_discovered": 1, 00:12:39.413 "num_base_bdevs_operational": 1, 00:12:39.413 "base_bdevs_list": [ 00:12:39.413 { 00:12:39.413 "name": null, 00:12:39.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.413 "is_configured": false, 00:12:39.413 "data_offset": 0, 00:12:39.413 "data_size": 65536 00:12:39.413 }, 00:12:39.413 { 00:12:39.413 "name": "BaseBdev2", 00:12:39.413 "uuid": "c731fabc-f2a7-5d16-ba5f-24b5919277e6", 00:12:39.413 "is_configured": true, 00:12:39.413 "data_offset": 0, 00:12:39.413 "data_size": 65536 00:12:39.413 } 00:12:39.413 ] 00:12:39.413 }' 00:12:39.413 13:22:28 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:39.413 13:22:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:39.413 13:22:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:39.413 13:22:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:39.413 13:22:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:39.413 13:22:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.413 13:22:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:39.413 [2024-11-17 13:22:28.500456] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:39.413 13:22:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.413 13:22:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:39.413 [2024-11-17 13:22:28.554042] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:39.413 [2024-11-17 13:22:28.555898] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:39.673 [2024-11-17 13:22:28.679288] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:39.673 [2024-11-17 13:22:28.679804] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:39.673 [2024-11-17 13:22:28.887838] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:39.673 [2024-11-17 13:22:28.888098] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:40.242 167.00 IOPS, 501.00 MiB/s 
[2024-11-17T13:22:29.466Z] [2024-11-17 13:22:29.326264] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:40.242 [2024-11-17 13:22:29.326579] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:40.501 13:22:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:40.501 13:22:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:40.501 13:22:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:40.501 13:22:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:40.501 13:22:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:40.501 13:22:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.502 13:22:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.502 13:22:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.502 13:22:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:40.502 13:22:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.502 13:22:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:40.502 "name": "raid_bdev1", 00:12:40.502 "uuid": "eee7bfc0-df5d-4c6e-9ee4-6c97da777319", 00:12:40.502 "strip_size_kb": 0, 00:12:40.502 "state": "online", 00:12:40.502 "raid_level": "raid1", 00:12:40.502 "superblock": false, 00:12:40.502 "num_base_bdevs": 2, 00:12:40.502 "num_base_bdevs_discovered": 2, 00:12:40.502 "num_base_bdevs_operational": 2, 00:12:40.502 "process": { 00:12:40.502 "type": "rebuild", 00:12:40.502 "target": "spare", 
00:12:40.502 "progress": { 00:12:40.502 "blocks": 12288, 00:12:40.502 "percent": 18 00:12:40.502 } 00:12:40.502 }, 00:12:40.502 "base_bdevs_list": [ 00:12:40.502 { 00:12:40.502 "name": "spare", 00:12:40.502 "uuid": "8eb55c72-cc2e-5968-a9ed-bc35f663dabc", 00:12:40.502 "is_configured": true, 00:12:40.502 "data_offset": 0, 00:12:40.502 "data_size": 65536 00:12:40.502 }, 00:12:40.502 { 00:12:40.502 "name": "BaseBdev2", 00:12:40.502 "uuid": "c731fabc-f2a7-5d16-ba5f-24b5919277e6", 00:12:40.502 "is_configured": true, 00:12:40.502 "data_offset": 0, 00:12:40.502 "data_size": 65536 00:12:40.502 } 00:12:40.502 ] 00:12:40.502 }' 00:12:40.502 13:22:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:40.502 13:22:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:40.502 13:22:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:40.502 [2024-11-17 13:22:29.662415] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:40.502 13:22:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:40.502 13:22:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:40.502 13:22:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:40.502 13:22:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:40.502 13:22:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:40.502 13:22:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=399 00:12:40.502 13:22:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:40.502 13:22:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 
rebuild spare 00:12:40.502 13:22:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:40.502 13:22:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:40.502 13:22:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:40.502 13:22:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:40.502 13:22:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.502 13:22:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.502 13:22:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:40.502 13:22:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.502 13:22:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.502 13:22:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:40.502 "name": "raid_bdev1", 00:12:40.502 "uuid": "eee7bfc0-df5d-4c6e-9ee4-6c97da777319", 00:12:40.502 "strip_size_kb": 0, 00:12:40.502 "state": "online", 00:12:40.502 "raid_level": "raid1", 00:12:40.502 "superblock": false, 00:12:40.502 "num_base_bdevs": 2, 00:12:40.502 "num_base_bdevs_discovered": 2, 00:12:40.502 "num_base_bdevs_operational": 2, 00:12:40.502 "process": { 00:12:40.502 "type": "rebuild", 00:12:40.502 "target": "spare", 00:12:40.502 "progress": { 00:12:40.502 "blocks": 14336, 00:12:40.502 "percent": 21 00:12:40.502 } 00:12:40.502 }, 00:12:40.502 "base_bdevs_list": [ 00:12:40.502 { 00:12:40.502 "name": "spare", 00:12:40.502 "uuid": "8eb55c72-cc2e-5968-a9ed-bc35f663dabc", 00:12:40.502 "is_configured": true, 00:12:40.502 "data_offset": 0, 00:12:40.502 "data_size": 65536 00:12:40.502 }, 00:12:40.502 { 00:12:40.502 "name": "BaseBdev2", 00:12:40.502 "uuid": 
"c731fabc-f2a7-5d16-ba5f-24b5919277e6", 00:12:40.502 "is_configured": true, 00:12:40.502 "data_offset": 0, 00:12:40.502 "data_size": 65536 00:12:40.502 } 00:12:40.502 ] 00:12:40.502 }' 00:12:40.502 13:22:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:40.761 13:22:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:40.761 13:22:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:40.761 13:22:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:40.761 13:22:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:40.761 [2024-11-17 13:22:29.882144] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:41.021 [2024-11-17 13:22:30.126157] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:12:41.021 [2024-11-17 13:22:30.126898] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:12:41.281 140.50 IOPS, 421.50 MiB/s [2024-11-17T13:22:30.505Z] [2024-11-17 13:22:30.336091] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:41.541 [2024-11-17 13:22:30.671547] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:12:41.802 13:22:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:41.802 13:22:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:41.802 13:22:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:41.802 13:22:30 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:41.802 13:22:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:41.802 13:22:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:41.802 13:22:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.802 13:22:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.802 13:22:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.802 13:22:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.802 13:22:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.802 13:22:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:41.802 "name": "raid_bdev1", 00:12:41.802 "uuid": "eee7bfc0-df5d-4c6e-9ee4-6c97da777319", 00:12:41.802 "strip_size_kb": 0, 00:12:41.802 "state": "online", 00:12:41.802 "raid_level": "raid1", 00:12:41.802 "superblock": false, 00:12:41.802 "num_base_bdevs": 2, 00:12:41.802 "num_base_bdevs_discovered": 2, 00:12:41.802 "num_base_bdevs_operational": 2, 00:12:41.802 "process": { 00:12:41.802 "type": "rebuild", 00:12:41.802 "target": "spare", 00:12:41.802 "progress": { 00:12:41.802 "blocks": 26624, 00:12:41.802 "percent": 40 00:12:41.802 } 00:12:41.802 }, 00:12:41.802 "base_bdevs_list": [ 00:12:41.802 { 00:12:41.802 "name": "spare", 00:12:41.802 "uuid": "8eb55c72-cc2e-5968-a9ed-bc35f663dabc", 00:12:41.802 "is_configured": true, 00:12:41.802 "data_offset": 0, 00:12:41.802 "data_size": 65536 00:12:41.802 }, 00:12:41.802 { 00:12:41.802 "name": "BaseBdev2", 00:12:41.802 "uuid": "c731fabc-f2a7-5d16-ba5f-24b5919277e6", 00:12:41.802 "is_configured": true, 00:12:41.802 "data_offset": 0, 00:12:41.802 "data_size": 65536 00:12:41.802 } 00:12:41.802 ] 
00:12:41.802 }' 00:12:41.802 13:22:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:41.802 13:22:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:41.802 13:22:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:41.802 [2024-11-17 13:22:30.880906] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:12:41.802 13:22:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:41.802 13:22:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:42.062 [2024-11-17 13:22:31.197111] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:12:42.321 123.20 IOPS, 369.60 MiB/s [2024-11-17T13:22:31.545Z] [2024-11-17 13:22:31.298737] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:12:42.321 [2024-11-17 13:22:31.521938] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:12:42.580 [2024-11-17 13:22:31.737133] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:12:42.840 13:22:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:42.840 13:22:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:42.840 13:22:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:42.840 13:22:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:42.840 13:22:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:12:42.840 13:22:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:42.840 13:22:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.840 13:22:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.840 13:22:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.840 13:22:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.840 13:22:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.840 13:22:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:42.840 "name": "raid_bdev1", 00:12:42.840 "uuid": "eee7bfc0-df5d-4c6e-9ee4-6c97da777319", 00:12:42.840 "strip_size_kb": 0, 00:12:42.840 "state": "online", 00:12:42.840 "raid_level": "raid1", 00:12:42.840 "superblock": false, 00:12:42.840 "num_base_bdevs": 2, 00:12:42.840 "num_base_bdevs_discovered": 2, 00:12:42.840 "num_base_bdevs_operational": 2, 00:12:42.840 "process": { 00:12:42.840 "type": "rebuild", 00:12:42.840 "target": "spare", 00:12:42.840 "progress": { 00:12:42.840 "blocks": 43008, 00:12:42.840 "percent": 65 00:12:42.840 } 00:12:42.840 }, 00:12:42.840 "base_bdevs_list": [ 00:12:42.840 { 00:12:42.840 "name": "spare", 00:12:42.840 "uuid": "8eb55c72-cc2e-5968-a9ed-bc35f663dabc", 00:12:42.840 "is_configured": true, 00:12:42.840 "data_offset": 0, 00:12:42.840 "data_size": 65536 00:12:42.840 }, 00:12:42.840 { 00:12:42.840 "name": "BaseBdev2", 00:12:42.840 "uuid": "c731fabc-f2a7-5d16-ba5f-24b5919277e6", 00:12:42.840 "is_configured": true, 00:12:42.840 "data_offset": 0, 00:12:42.840 "data_size": 65536 00:12:42.840 } 00:12:42.840 ] 00:12:42.840 }' 00:12:42.840 13:22:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:42.840 13:22:32 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:42.840 13:22:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:42.840 13:22:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:42.840 13:22:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:43.099 109.50 IOPS, 328.50 MiB/s [2024-11-17T13:22:32.323Z] [2024-11-17 13:22:32.314980] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:12:43.359 [2024-11-17 13:22:32.427403] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:12:43.617 [2024-11-17 13:22:32.758057] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:12:43.875 [2024-11-17 13:22:32.965695] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:12:43.875 [2024-11-17 13:22:32.966023] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:12:43.875 13:22:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:43.875 13:22:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:43.875 13:22:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:43.875 13:22:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:43.875 13:22:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:43.875 13:22:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:43.875 13:22:33 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.875 13:22:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.875 13:22:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.875 13:22:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.875 13:22:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.134 13:22:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:44.134 "name": "raid_bdev1", 00:12:44.134 "uuid": "eee7bfc0-df5d-4c6e-9ee4-6c97da777319", 00:12:44.134 "strip_size_kb": 0, 00:12:44.134 "state": "online", 00:12:44.134 "raid_level": "raid1", 00:12:44.134 "superblock": false, 00:12:44.134 "num_base_bdevs": 2, 00:12:44.134 "num_base_bdevs_discovered": 2, 00:12:44.134 "num_base_bdevs_operational": 2, 00:12:44.134 "process": { 00:12:44.134 "type": "rebuild", 00:12:44.134 "target": "spare", 00:12:44.134 "progress": { 00:12:44.134 "blocks": 59392, 00:12:44.134 "percent": 90 00:12:44.134 } 00:12:44.134 }, 00:12:44.134 "base_bdevs_list": [ 00:12:44.134 { 00:12:44.134 "name": "spare", 00:12:44.134 "uuid": "8eb55c72-cc2e-5968-a9ed-bc35f663dabc", 00:12:44.134 "is_configured": true, 00:12:44.134 "data_offset": 0, 00:12:44.134 "data_size": 65536 00:12:44.134 }, 00:12:44.134 { 00:12:44.134 "name": "BaseBdev2", 00:12:44.134 "uuid": "c731fabc-f2a7-5d16-ba5f-24b5919277e6", 00:12:44.134 "is_configured": true, 00:12:44.134 "data_offset": 0, 00:12:44.134 "data_size": 65536 00:12:44.134 } 00:12:44.134 ] 00:12:44.134 }' 00:12:44.134 13:22:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:44.134 13:22:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:44.134 13:22:33 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:44.134 13:22:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:44.134 13:22:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:44.393 98.71 IOPS, 296.14 MiB/s [2024-11-17T13:22:33.617Z] [2024-11-17 13:22:33.409485] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:44.393 [2024-11-17 13:22:33.509337] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:44.393 [2024-11-17 13:22:33.511774] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:45.329 13:22:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:45.329 13:22:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:45.329 13:22:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:45.329 13:22:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:45.329 13:22:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:45.329 13:22:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:45.329 13:22:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.329 13:22:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.329 13:22:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.329 13:22:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.329 90.62 IOPS, 271.88 MiB/s [2024-11-17T13:22:34.553Z] 13:22:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.329 13:22:34 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:45.329 "name": "raid_bdev1", 00:12:45.329 "uuid": "eee7bfc0-df5d-4c6e-9ee4-6c97da777319", 00:12:45.329 "strip_size_kb": 0, 00:12:45.329 "state": "online", 00:12:45.329 "raid_level": "raid1", 00:12:45.329 "superblock": false, 00:12:45.329 "num_base_bdevs": 2, 00:12:45.329 "num_base_bdevs_discovered": 2, 00:12:45.329 "num_base_bdevs_operational": 2, 00:12:45.329 "base_bdevs_list": [ 00:12:45.329 { 00:12:45.329 "name": "spare", 00:12:45.329 "uuid": "8eb55c72-cc2e-5968-a9ed-bc35f663dabc", 00:12:45.329 "is_configured": true, 00:12:45.329 "data_offset": 0, 00:12:45.329 "data_size": 65536 00:12:45.329 }, 00:12:45.329 { 00:12:45.329 "name": "BaseBdev2", 00:12:45.330 "uuid": "c731fabc-f2a7-5d16-ba5f-24b5919277e6", 00:12:45.330 "is_configured": true, 00:12:45.330 "data_offset": 0, 00:12:45.330 "data_size": 65536 00:12:45.330 } 00:12:45.330 ] 00:12:45.330 }' 00:12:45.330 13:22:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:45.330 13:22:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:45.330 13:22:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:45.330 13:22:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:45.330 13:22:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:12:45.330 13:22:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:45.330 13:22:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:45.330 13:22:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:45.330 13:22:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:45.330 13:22:34 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:45.330 13:22:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.330 13:22:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.330 13:22:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.330 13:22:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.330 13:22:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.330 13:22:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:45.330 "name": "raid_bdev1", 00:12:45.330 "uuid": "eee7bfc0-df5d-4c6e-9ee4-6c97da777319", 00:12:45.330 "strip_size_kb": 0, 00:12:45.330 "state": "online", 00:12:45.330 "raid_level": "raid1", 00:12:45.330 "superblock": false, 00:12:45.330 "num_base_bdevs": 2, 00:12:45.330 "num_base_bdevs_discovered": 2, 00:12:45.330 "num_base_bdevs_operational": 2, 00:12:45.330 "base_bdevs_list": [ 00:12:45.330 { 00:12:45.330 "name": "spare", 00:12:45.330 "uuid": "8eb55c72-cc2e-5968-a9ed-bc35f663dabc", 00:12:45.330 "is_configured": true, 00:12:45.330 "data_offset": 0, 00:12:45.330 "data_size": 65536 00:12:45.330 }, 00:12:45.330 { 00:12:45.330 "name": "BaseBdev2", 00:12:45.330 "uuid": "c731fabc-f2a7-5d16-ba5f-24b5919277e6", 00:12:45.330 "is_configured": true, 00:12:45.330 "data_offset": 0, 00:12:45.330 "data_size": 65536 00:12:45.330 } 00:12:45.330 ] 00:12:45.330 }' 00:12:45.330 13:22:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:45.330 13:22:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:45.330 13:22:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:45.330 13:22:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == 
\n\o\n\e ]] 00:12:45.330 13:22:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:45.330 13:22:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:45.330 13:22:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:45.330 13:22:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:45.330 13:22:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:45.330 13:22:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:45.330 13:22:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.330 13:22:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.330 13:22:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.330 13:22:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.330 13:22:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.330 13:22:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.330 13:22:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.330 13:22:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.330 13:22:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.330 13:22:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.330 "name": "raid_bdev1", 00:12:45.330 "uuid": "eee7bfc0-df5d-4c6e-9ee4-6c97da777319", 00:12:45.330 "strip_size_kb": 0, 00:12:45.330 "state": "online", 00:12:45.330 "raid_level": "raid1", 00:12:45.330 "superblock": false, 
00:12:45.330 "num_base_bdevs": 2, 00:12:45.330 "num_base_bdevs_discovered": 2, 00:12:45.330 "num_base_bdevs_operational": 2, 00:12:45.330 "base_bdevs_list": [ 00:12:45.330 { 00:12:45.330 "name": "spare", 00:12:45.330 "uuid": "8eb55c72-cc2e-5968-a9ed-bc35f663dabc", 00:12:45.330 "is_configured": true, 00:12:45.330 "data_offset": 0, 00:12:45.330 "data_size": 65536 00:12:45.330 }, 00:12:45.330 { 00:12:45.330 "name": "BaseBdev2", 00:12:45.330 "uuid": "c731fabc-f2a7-5d16-ba5f-24b5919277e6", 00:12:45.330 "is_configured": true, 00:12:45.330 "data_offset": 0, 00:12:45.330 "data_size": 65536 00:12:45.330 } 00:12:45.330 ] 00:12:45.330 }' 00:12:45.330 13:22:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.330 13:22:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.898 13:22:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:45.898 13:22:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.898 13:22:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.898 [2024-11-17 13:22:34.899406] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:45.898 [2024-11-17 13:22:34.899439] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:45.898 00:12:45.898 Latency(us) 00:12:45.898 [2024-11-17T13:22:35.122Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:45.899 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:45.899 raid_bdev1 : 8.75 86.33 258.99 0.00 0.00 14855.34 289.76 109436.53 00:12:45.899 [2024-11-17T13:22:35.123Z] =================================================================================================================== 00:12:45.899 [2024-11-17T13:22:35.123Z] Total : 86.33 258.99 0.00 0.00 14855.34 289.76 109436.53 
00:12:45.899 [2024-11-17 13:22:34.964205] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:45.899 [2024-11-17 13:22:34.964349] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:45.899 [2024-11-17 13:22:34.964492] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:45.899 [2024-11-17 13:22:34.964562] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:45.899 { 00:12:45.899 "results": [ 00:12:45.899 { 00:12:45.899 "job": "raid_bdev1", 00:12:45.899 "core_mask": "0x1", 00:12:45.899 "workload": "randrw", 00:12:45.899 "percentage": 50, 00:12:45.899 "status": "finished", 00:12:45.899 "queue_depth": 2, 00:12:45.899 "io_size": 3145728, 00:12:45.899 "runtime": 8.745593, 00:12:45.899 "iops": 86.3291946012123, 00:12:45.899 "mibps": 258.9875838036369, 00:12:45.899 "io_failed": 0, 00:12:45.899 "io_timeout": 0, 00:12:45.899 "avg_latency_us": 14855.336411116576, 00:12:45.899 "min_latency_us": 289.7606986899563, 00:12:45.899 "max_latency_us": 109436.5344978166 00:12:45.899 } 00:12:45.899 ], 00:12:45.899 "core_count": 1 00:12:45.899 } 00:12:45.899 13:22:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.899 13:22:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.899 13:22:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:45.899 13:22:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.899 13:22:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.899 13:22:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.899 13:22:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:45.899 13:22:35 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:45.899 13:22:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:45.899 13:22:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:45.899 13:22:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:45.899 13:22:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:45.899 13:22:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:45.899 13:22:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:45.899 13:22:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:45.899 13:22:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:45.899 13:22:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:45.899 13:22:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:45.899 13:22:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:46.165 /dev/nbd0 00:12:46.165 13:22:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:46.165 13:22:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:46.165 13:22:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:46.165 13:22:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:12:46.165 13:22:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:46.165 13:22:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:46.165 13:22:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- 
# grep -q -w nbd0 /proc/partitions 00:12:46.165 13:22:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:12:46.165 13:22:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:46.165 13:22:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:46.165 13:22:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:46.165 1+0 records in 00:12:46.165 1+0 records out 00:12:46.165 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000336571 s, 12.2 MB/s 00:12:46.165 13:22:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:46.165 13:22:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:12:46.165 13:22:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:46.166 13:22:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:46.166 13:22:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:12:46.166 13:22:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:46.166 13:22:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:46.166 13:22:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:46.166 13:22:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:12:46.166 13:22:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:12:46.166 13:22:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:46.166 13:22:35 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:12:46.166 13:22:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:46.166 13:22:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:46.166 13:22:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:46.166 13:22:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:46.166 13:22:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:46.166 13:22:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:46.166 13:22:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:12:46.436 /dev/nbd1 00:12:46.436 13:22:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:46.436 13:22:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:46.436 13:22:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:46.436 13:22:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:12:46.436 13:22:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:46.436 13:22:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:46.436 13:22:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:46.436 13:22:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:12:46.436 13:22:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:46.436 13:22:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:46.436 13:22:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:46.436 1+0 records in 00:12:46.436 1+0 records out 00:12:46.436 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000564879 s, 7.3 MB/s 00:12:46.436 13:22:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:46.436 13:22:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:12:46.436 13:22:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:46.436 13:22:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:46.436 13:22:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:12:46.436 13:22:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:46.436 13:22:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:46.436 13:22:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:46.695 13:22:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:46.695 13:22:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:46.695 13:22:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:46.695 13:22:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:46.695 13:22:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:46.695 13:22:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:46.695 13:22:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:46.955 13:22:35 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:46.955 13:22:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:46.955 13:22:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:46.955 13:22:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:46.955 13:22:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:46.955 13:22:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:46.955 13:22:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:46.955 13:22:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:46.955 13:22:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:46.955 13:22:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:46.955 13:22:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:46.955 13:22:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:46.955 13:22:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:46.955 13:22:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:46.955 13:22:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:46.955 13:22:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:46.955 13:22:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:46.955 13:22:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:46.955 13:22:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:46.955 13:22:36 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:46.955 13:22:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:46.955 13:22:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:46.955 13:22:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:46.955 13:22:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:46.955 13:22:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76361 00:12:46.955 13:22:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 76361 ']' 00:12:46.956 13:22:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 76361 00:12:46.956 13:22:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:12:46.956 13:22:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:46.956 13:22:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76361 00:12:47.215 13:22:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:47.215 13:22:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:47.215 13:22:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76361' 00:12:47.215 killing process with pid 76361 00:12:47.215 13:22:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 76361 00:12:47.215 Received shutdown signal, test time was about 10.020935 seconds 00:12:47.215 00:12:47.215 Latency(us) 00:12:47.215 [2024-11-17T13:22:36.439Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:47.215 [2024-11-17T13:22:36.439Z] 
=================================================================================================================== 00:12:47.215 [2024-11-17T13:22:36.439Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:47.215 [2024-11-17 13:22:36.212201] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:47.215 13:22:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 76361 00:12:47.215 [2024-11-17 13:22:36.430768] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:48.595 13:22:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:48.595 00:12:48.595 real 0m13.106s 00:12:48.595 user 0m16.233s 00:12:48.595 sys 0m1.538s 00:12:48.595 13:22:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:48.595 13:22:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.595 ************************************ 00:12:48.595 END TEST raid_rebuild_test_io 00:12:48.595 ************************************ 00:12:48.595 13:22:37 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:12:48.595 13:22:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:48.595 13:22:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:48.595 13:22:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:48.595 ************************************ 00:12:48.595 START TEST raid_rebuild_test_sb_io 00:12:48.595 ************************************ 00:12:48.595 13:22:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:12:48.595 13:22:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:48.595 13:22:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:48.595 13:22:37 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:48.595 13:22:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:48.595 13:22:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:48.595 13:22:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:48.595 13:22:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:48.595 13:22:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:48.595 13:22:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:48.595 13:22:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:48.595 13:22:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:48.595 13:22:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:48.595 13:22:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:48.595 13:22:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:48.595 13:22:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:48.595 13:22:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:48.595 13:22:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:48.595 13:22:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:48.595 13:22:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:48.595 13:22:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:48.595 13:22:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:48.595 13:22:37 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:48.595 13:22:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:48.595 13:22:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:48.595 13:22:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76750 00:12:48.595 13:22:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:48.595 13:22:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76750 00:12:48.596 13:22:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 76750 ']' 00:12:48.596 13:22:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:48.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:48.596 13:22:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:48.596 13:22:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:48.596 13:22:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:48.596 13:22:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.596 [2024-11-17 13:22:37.732709] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:12:48.596 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:48.596 Zero copy mechanism will not be used. 
00:12:48.596 [2024-11-17 13:22:37.732874] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76750 ] 00:12:48.855 [2024-11-17 13:22:37.904639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:48.855 [2024-11-17 13:22:38.016235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:49.114 [2024-11-17 13:22:38.232122] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:49.114 [2024-11-17 13:22:38.232216] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:49.684 13:22:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:49.684 13:22:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:12:49.684 13:22:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:49.684 13:22:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:49.684 13:22:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.684 13:22:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.684 BaseBdev1_malloc 00:12:49.684 13:22:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.684 13:22:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:49.684 13:22:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.684 13:22:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.684 [2024-11-17 13:22:38.727586] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:49.684 [2024-11-17 13:22:38.727702] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:49.684 [2024-11-17 13:22:38.727757] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:49.684 [2024-11-17 13:22:38.727823] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:49.684 [2024-11-17 13:22:38.730063] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:49.684 [2024-11-17 13:22:38.730140] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:49.684 BaseBdev1 00:12:49.684 13:22:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.684 13:22:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:49.684 13:22:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:49.684 13:22:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.684 13:22:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.684 BaseBdev2_malloc 00:12:49.684 13:22:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.684 13:22:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:49.684 13:22:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.684 13:22:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.684 [2024-11-17 13:22:38.784343] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:49.684 [2024-11-17 13:22:38.784436] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:12:49.684 [2024-11-17 13:22:38.784458] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:49.684 [2024-11-17 13:22:38.784471] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:49.684 [2024-11-17 13:22:38.786613] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:49.684 [2024-11-17 13:22:38.786653] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:49.684 BaseBdev2 00:12:49.684 13:22:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.684 13:22:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:49.684 13:22:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.684 13:22:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.684 spare_malloc 00:12:49.684 13:22:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.684 13:22:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:49.684 13:22:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.684 13:22:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.684 spare_delay 00:12:49.684 13:22:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.684 13:22:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:49.684 13:22:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.684 13:22:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.684 
[2024-11-17 13:22:38.864636] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:49.684 [2024-11-17 13:22:38.864692] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:49.684 [2024-11-17 13:22:38.864710] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:49.684 [2024-11-17 13:22:38.864720] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:49.684 [2024-11-17 13:22:38.866785] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:49.684 [2024-11-17 13:22:38.866868] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:49.684 spare 00:12:49.684 13:22:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.684 13:22:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:49.684 13:22:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.684 13:22:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.684 [2024-11-17 13:22:38.876694] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:49.684 [2024-11-17 13:22:38.878517] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:49.684 [2024-11-17 13:22:38.878743] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:49.684 [2024-11-17 13:22:38.878761] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:49.684 [2024-11-17 13:22:38.879008] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:49.684 [2024-11-17 13:22:38.879178] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:49.684 [2024-11-17 
13:22:38.879189] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:49.684 [2024-11-17 13:22:38.879358] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:49.684 13:22:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.685 13:22:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:49.685 13:22:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:49.685 13:22:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:49.685 13:22:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:49.685 13:22:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:49.685 13:22:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:49.685 13:22:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.685 13:22:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.685 13:22:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.685 13:22:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.685 13:22:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.685 13:22:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.685 13:22:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.685 13:22:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.685 13:22:38 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.943 13:22:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.943 "name": "raid_bdev1", 00:12:49.943 "uuid": "12d45c08-9b35-458d-83da-f43e7ad38658", 00:12:49.943 "strip_size_kb": 0, 00:12:49.943 "state": "online", 00:12:49.943 "raid_level": "raid1", 00:12:49.943 "superblock": true, 00:12:49.943 "num_base_bdevs": 2, 00:12:49.943 "num_base_bdevs_discovered": 2, 00:12:49.943 "num_base_bdevs_operational": 2, 00:12:49.943 "base_bdevs_list": [ 00:12:49.943 { 00:12:49.943 "name": "BaseBdev1", 00:12:49.943 "uuid": "d7e60fbf-eb91-53fe-8fbd-8e6cdde5511b", 00:12:49.943 "is_configured": true, 00:12:49.943 "data_offset": 2048, 00:12:49.943 "data_size": 63488 00:12:49.943 }, 00:12:49.943 { 00:12:49.943 "name": "BaseBdev2", 00:12:49.943 "uuid": "c5c5473c-b956-51d9-8911-ee13e93a776a", 00:12:49.943 "is_configured": true, 00:12:49.943 "data_offset": 2048, 00:12:49.943 "data_size": 63488 00:12:49.943 } 00:12:49.943 ] 00:12:49.943 }' 00:12:49.943 13:22:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.943 13:22:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.201 13:22:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:50.201 13:22:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.201 13:22:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.201 13:22:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:50.201 [2024-11-17 13:22:39.356218] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:50.201 13:22:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.201 13:22:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=63488 00:12:50.201 13:22:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.201 13:22:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:50.201 13:22:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.201 13:22:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.201 13:22:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.460 13:22:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:50.460 13:22:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:50.460 13:22:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:50.460 13:22:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:50.460 13:22:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.460 13:22:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.460 [2024-11-17 13:22:39.451694] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:50.460 13:22:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.460 13:22:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:50.460 13:22:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:50.460 13:22:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:50.460 13:22:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:12:50.460 13:22:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:50.460 13:22:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:50.460 13:22:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.460 13:22:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.460 13:22:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.460 13:22:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.460 13:22:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.460 13:22:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.460 13:22:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.460 13:22:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.460 13:22:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.460 13:22:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.460 "name": "raid_bdev1", 00:12:50.460 "uuid": "12d45c08-9b35-458d-83da-f43e7ad38658", 00:12:50.460 "strip_size_kb": 0, 00:12:50.460 "state": "online", 00:12:50.460 "raid_level": "raid1", 00:12:50.460 "superblock": true, 00:12:50.460 "num_base_bdevs": 2, 00:12:50.460 "num_base_bdevs_discovered": 1, 00:12:50.460 "num_base_bdevs_operational": 1, 00:12:50.460 "base_bdevs_list": [ 00:12:50.460 { 00:12:50.460 "name": null, 00:12:50.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.460 "is_configured": false, 00:12:50.460 "data_offset": 0, 00:12:50.460 "data_size": 63488 00:12:50.460 }, 00:12:50.460 { 00:12:50.460 "name": "BaseBdev2", 00:12:50.460 "uuid": 
"c5c5473c-b956-51d9-8911-ee13e93a776a", 00:12:50.460 "is_configured": true, 00:12:50.460 "data_offset": 2048, 00:12:50.460 "data_size": 63488 00:12:50.460 } 00:12:50.460 ] 00:12:50.460 }' 00:12:50.460 13:22:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.460 13:22:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.460 [2024-11-17 13:22:39.556917] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:50.460 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:50.460 Zero copy mechanism will not be used. 00:12:50.460 Running I/O for 60 seconds... 00:12:50.720 13:22:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:50.720 13:22:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.720 13:22:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.720 [2024-11-17 13:22:39.853849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:50.720 13:22:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.720 13:22:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:50.720 [2024-11-17 13:22:39.897515] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:50.720 [2024-11-17 13:22:39.899846] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:50.979 [2024-11-17 13:22:40.026848] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:50.979 [2024-11-17 13:22:40.028022] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:51.239 [2024-11-17 13:22:40.234252] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:51.239 [2024-11-17 13:22:40.234962] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:51.499 147.00 IOPS, 441.00 MiB/s [2024-11-17T13:22:40.723Z] [2024-11-17 13:22:40.590766] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:51.499 [2024-11-17 13:22:40.596977] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:51.759 [2024-11-17 13:22:40.808563] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:51.759 [2024-11-17 13:22:40.809180] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:51.759 13:22:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:51.759 13:22:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:51.759 13:22:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:51.759 13:22:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:51.759 13:22:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:51.759 13:22:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:51.759 13:22:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.759 13:22:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.759 13:22:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:12:51.759 13:22:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.759 13:22:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:51.759 "name": "raid_bdev1", 00:12:51.759 "uuid": "12d45c08-9b35-458d-83da-f43e7ad38658", 00:12:51.759 "strip_size_kb": 0, 00:12:51.759 "state": "online", 00:12:51.759 "raid_level": "raid1", 00:12:51.759 "superblock": true, 00:12:51.759 "num_base_bdevs": 2, 00:12:51.759 "num_base_bdevs_discovered": 2, 00:12:51.759 "num_base_bdevs_operational": 2, 00:12:51.759 "process": { 00:12:51.759 "type": "rebuild", 00:12:51.759 "target": "spare", 00:12:51.759 "progress": { 00:12:51.759 "blocks": 10240, 00:12:51.759 "percent": 16 00:12:51.759 } 00:12:51.759 }, 00:12:51.759 "base_bdevs_list": [ 00:12:51.759 { 00:12:51.759 "name": "spare", 00:12:51.759 "uuid": "049c22f4-60fc-5aa3-b145-1277f76e6ef2", 00:12:51.759 "is_configured": true, 00:12:51.759 "data_offset": 2048, 00:12:51.759 "data_size": 63488 00:12:51.759 }, 00:12:51.759 { 00:12:51.759 "name": "BaseBdev2", 00:12:51.759 "uuid": "c5c5473c-b956-51d9-8911-ee13e93a776a", 00:12:51.759 "is_configured": true, 00:12:51.759 "data_offset": 2048, 00:12:51.759 "data_size": 63488 00:12:51.759 } 00:12:51.759 ] 00:12:51.759 }' 00:12:51.759 13:22:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:51.759 13:22:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:51.759 13:22:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:52.019 13:22:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:52.019 13:22:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:52.019 13:22:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:52.019 13:22:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:52.019 [2024-11-17 13:22:41.028928] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:52.019 [2024-11-17 13:22:41.048339] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:52.019 [2024-11-17 13:22:41.060597] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:52.019 [2024-11-17 13:22:41.069337] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:52.019 [2024-11-17 13:22:41.069451] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:52.019 [2024-11-17 13:22:41.069485] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:52.019 [2024-11-17 13:22:41.115476] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:12:52.019 13:22:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.019 13:22:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:52.019 13:22:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:52.019 13:22:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:52.019 13:22:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:52.019 13:22:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:52.019 13:22:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:52.019 13:22:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.019 13:22:41 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.019 13:22:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.019 13:22:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.019 13:22:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.019 13:22:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.019 13:22:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:52.019 13:22:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.019 13:22:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.019 13:22:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.019 "name": "raid_bdev1", 00:12:52.019 "uuid": "12d45c08-9b35-458d-83da-f43e7ad38658", 00:12:52.019 "strip_size_kb": 0, 00:12:52.019 "state": "online", 00:12:52.019 "raid_level": "raid1", 00:12:52.019 "superblock": true, 00:12:52.019 "num_base_bdevs": 2, 00:12:52.019 "num_base_bdevs_discovered": 1, 00:12:52.019 "num_base_bdevs_operational": 1, 00:12:52.019 "base_bdevs_list": [ 00:12:52.019 { 00:12:52.019 "name": null, 00:12:52.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.019 "is_configured": false, 00:12:52.019 "data_offset": 0, 00:12:52.019 "data_size": 63488 00:12:52.019 }, 00:12:52.019 { 00:12:52.019 "name": "BaseBdev2", 00:12:52.019 "uuid": "c5c5473c-b956-51d9-8911-ee13e93a776a", 00:12:52.019 "is_configured": true, 00:12:52.019 "data_offset": 2048, 00:12:52.019 "data_size": 63488 00:12:52.019 } 00:12:52.019 ] 00:12:52.019 }' 00:12:52.019 13:22:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.019 13:22:41 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:12:52.586 13:22:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:52.586 13:22:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:52.586 13:22:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:52.586 13:22:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:52.586 13:22:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:52.586 157.50 IOPS, 472.50 MiB/s [2024-11-17T13:22:41.810Z] 13:22:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.586 13:22:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.586 13:22:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.586 13:22:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:52.586 13:22:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.586 13:22:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:52.586 "name": "raid_bdev1", 00:12:52.586 "uuid": "12d45c08-9b35-458d-83da-f43e7ad38658", 00:12:52.586 "strip_size_kb": 0, 00:12:52.586 "state": "online", 00:12:52.586 "raid_level": "raid1", 00:12:52.586 "superblock": true, 00:12:52.586 "num_base_bdevs": 2, 00:12:52.586 "num_base_bdevs_discovered": 1, 00:12:52.586 "num_base_bdevs_operational": 1, 00:12:52.586 "base_bdevs_list": [ 00:12:52.586 { 00:12:52.586 "name": null, 00:12:52.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.586 "is_configured": false, 00:12:52.586 "data_offset": 0, 00:12:52.586 "data_size": 63488 00:12:52.586 }, 00:12:52.586 { 00:12:52.586 "name": "BaseBdev2", 
00:12:52.586 "uuid": "c5c5473c-b956-51d9-8911-ee13e93a776a", 00:12:52.586 "is_configured": true, 00:12:52.586 "data_offset": 2048, 00:12:52.586 "data_size": 63488 00:12:52.586 } 00:12:52.586 ] 00:12:52.586 }' 00:12:52.586 13:22:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:52.586 13:22:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:52.586 13:22:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:52.586 13:22:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:52.586 13:22:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:52.586 13:22:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.586 13:22:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:52.586 [2024-11-17 13:22:41.710661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:52.586 13:22:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.586 13:22:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:52.586 [2024-11-17 13:22:41.776331] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:52.586 [2024-11-17 13:22:41.778695] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:52.845 [2024-11-17 13:22:41.893936] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:52.845 [2024-11-17 13:22:41.894984] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:53.104 [2024-11-17 13:22:42.111948] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:53.104 [2024-11-17 13:22:42.112565] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:53.362 [2024-11-17 13:22:42.448284] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:53.362 142.33 IOPS, 427.00 MiB/s [2024-11-17T13:22:42.586Z] [2024-11-17 13:22:42.576345] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:53.621 13:22:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:53.621 13:22:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:53.621 13:22:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:53.621 13:22:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:53.621 13:22:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:53.621 13:22:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.621 13:22:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.621 13:22:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.621 13:22:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:53.621 13:22:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.621 13:22:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:53.621 "name": "raid_bdev1", 00:12:53.621 "uuid": "12d45c08-9b35-458d-83da-f43e7ad38658", 00:12:53.621 "strip_size_kb": 0, 00:12:53.621 
"state": "online", 00:12:53.621 "raid_level": "raid1", 00:12:53.621 "superblock": true, 00:12:53.621 "num_base_bdevs": 2, 00:12:53.621 "num_base_bdevs_discovered": 2, 00:12:53.621 "num_base_bdevs_operational": 2, 00:12:53.621 "process": { 00:12:53.621 "type": "rebuild", 00:12:53.621 "target": "spare", 00:12:53.621 "progress": { 00:12:53.621 "blocks": 12288, 00:12:53.621 "percent": 19 00:12:53.621 } 00:12:53.621 }, 00:12:53.621 "base_bdevs_list": [ 00:12:53.621 { 00:12:53.621 "name": "spare", 00:12:53.621 "uuid": "049c22f4-60fc-5aa3-b145-1277f76e6ef2", 00:12:53.621 "is_configured": true, 00:12:53.621 "data_offset": 2048, 00:12:53.621 "data_size": 63488 00:12:53.621 }, 00:12:53.621 { 00:12:53.621 "name": "BaseBdev2", 00:12:53.621 "uuid": "c5c5473c-b956-51d9-8911-ee13e93a776a", 00:12:53.621 "is_configured": true, 00:12:53.621 "data_offset": 2048, 00:12:53.621 "data_size": 63488 00:12:53.621 } 00:12:53.621 ] 00:12:53.621 }' 00:12:53.621 13:22:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:53.621 13:22:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:53.621 13:22:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:53.880 13:22:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:53.880 13:22:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:53.880 13:22:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:53.880 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:53.880 13:22:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:53.880 13:22:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:53.880 13:22:42 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:53.880 13:22:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=412 00:12:53.880 13:22:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:53.880 13:22:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:53.880 13:22:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:53.880 13:22:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:53.880 13:22:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:53.880 13:22:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:53.880 13:22:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.880 13:22:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.880 13:22:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.880 13:22:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:53.880 13:22:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.880 13:22:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:53.880 "name": "raid_bdev1", 00:12:53.880 "uuid": "12d45c08-9b35-458d-83da-f43e7ad38658", 00:12:53.880 "strip_size_kb": 0, 00:12:53.880 "state": "online", 00:12:53.880 "raid_level": "raid1", 00:12:53.880 "superblock": true, 00:12:53.880 "num_base_bdevs": 2, 00:12:53.880 "num_base_bdevs_discovered": 2, 00:12:53.880 "num_base_bdevs_operational": 2, 00:12:53.880 "process": { 00:12:53.880 "type": "rebuild", 00:12:53.880 "target": "spare", 00:12:53.880 
"progress": { 00:12:53.880 "blocks": 14336, 00:12:53.880 "percent": 22 00:12:53.880 } 00:12:53.880 }, 00:12:53.880 "base_bdevs_list": [ 00:12:53.880 { 00:12:53.880 "name": "spare", 00:12:53.880 "uuid": "049c22f4-60fc-5aa3-b145-1277f76e6ef2", 00:12:53.880 "is_configured": true, 00:12:53.880 "data_offset": 2048, 00:12:53.880 "data_size": 63488 00:12:53.880 }, 00:12:53.880 { 00:12:53.880 "name": "BaseBdev2", 00:12:53.880 "uuid": "c5c5473c-b956-51d9-8911-ee13e93a776a", 00:12:53.880 "is_configured": true, 00:12:53.880 "data_offset": 2048, 00:12:53.880 "data_size": 63488 00:12:53.880 } 00:12:53.880 ] 00:12:53.880 }' 00:12:53.880 13:22:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:53.880 13:22:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:53.880 13:22:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:53.880 13:22:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:53.880 13:22:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:54.139 [2024-11-17 13:22:43.297263] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:54.656 125.50 IOPS, 376.50 MiB/s [2024-11-17T13:22:43.881Z] [2024-11-17 13:22:43.784773] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:12:54.915 13:22:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:54.915 13:22:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:54.915 13:22:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:54.915 13:22:44 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:54.915 13:22:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:54.915 13:22:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:54.915 13:22:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.915 13:22:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.915 13:22:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.915 13:22:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.915 13:22:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.915 13:22:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:54.915 "name": "raid_bdev1", 00:12:54.915 "uuid": "12d45c08-9b35-458d-83da-f43e7ad38658", 00:12:54.915 "strip_size_kb": 0, 00:12:54.915 "state": "online", 00:12:54.915 "raid_level": "raid1", 00:12:54.915 "superblock": true, 00:12:54.915 "num_base_bdevs": 2, 00:12:54.915 "num_base_bdevs_discovered": 2, 00:12:54.915 "num_base_bdevs_operational": 2, 00:12:54.915 "process": { 00:12:54.915 "type": "rebuild", 00:12:54.915 "target": "spare", 00:12:54.915 "progress": { 00:12:54.915 "blocks": 30720, 00:12:54.915 "percent": 48 00:12:54.915 } 00:12:54.915 }, 00:12:54.915 "base_bdevs_list": [ 00:12:54.915 { 00:12:54.915 "name": "spare", 00:12:54.915 "uuid": "049c22f4-60fc-5aa3-b145-1277f76e6ef2", 00:12:54.915 "is_configured": true, 00:12:54.915 "data_offset": 2048, 00:12:54.915 "data_size": 63488 00:12:54.915 }, 00:12:54.915 { 00:12:54.915 "name": "BaseBdev2", 00:12:54.915 "uuid": "c5c5473c-b956-51d9-8911-ee13e93a776a", 00:12:54.915 "is_configured": true, 00:12:54.915 "data_offset": 2048, 00:12:54.915 "data_size": 63488 00:12:54.915 } 00:12:54.915 ] 
00:12:54.915 }' 00:12:54.915 13:22:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:54.915 13:22:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:54.915 13:22:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:54.915 [2024-11-17 13:22:44.134132] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:12:55.173 13:22:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:55.173 13:22:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:55.173 [2024-11-17 13:22:44.356072] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:12:55.997 107.60 IOPS, 322.80 MiB/s [2024-11-17T13:22:45.221Z] 13:22:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:55.997 13:22:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:55.997 13:22:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:55.997 13:22:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:55.997 13:22:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:55.997 13:22:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:55.997 13:22:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.997 13:22:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.997 13:22:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:12:55.997 13:22:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:55.997 13:22:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.997 13:22:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:55.997 "name": "raid_bdev1", 00:12:55.997 "uuid": "12d45c08-9b35-458d-83da-f43e7ad38658", 00:12:55.997 "strip_size_kb": 0, 00:12:55.997 "state": "online", 00:12:55.997 "raid_level": "raid1", 00:12:55.997 "superblock": true, 00:12:55.997 "num_base_bdevs": 2, 00:12:55.997 "num_base_bdevs_discovered": 2, 00:12:55.997 "num_base_bdevs_operational": 2, 00:12:55.997 "process": { 00:12:55.997 "type": "rebuild", 00:12:55.997 "target": "spare", 00:12:55.997 "progress": { 00:12:55.997 "blocks": 47104, 00:12:55.997 "percent": 74 00:12:55.997 } 00:12:55.997 }, 00:12:55.997 "base_bdevs_list": [ 00:12:55.997 { 00:12:55.997 "name": "spare", 00:12:55.997 "uuid": "049c22f4-60fc-5aa3-b145-1277f76e6ef2", 00:12:55.997 "is_configured": true, 00:12:55.997 "data_offset": 2048, 00:12:55.997 "data_size": 63488 00:12:55.997 }, 00:12:55.997 { 00:12:55.997 "name": "BaseBdev2", 00:12:55.997 "uuid": "c5c5473c-b956-51d9-8911-ee13e93a776a", 00:12:55.997 "is_configured": true, 00:12:55.997 "data_offset": 2048, 00:12:55.997 "data_size": 63488 00:12:55.997 } 00:12:55.997 ] 00:12:55.997 }' 00:12:55.998 13:22:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:56.256 13:22:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:56.256 13:22:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:56.256 13:22:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:56.256 13:22:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:56.775 97.00 IOPS, 291.00 
MiB/s [2024-11-17T13:22:45.999Z] [2024-11-17 13:22:45.769535] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:12:56.775 [2024-11-17 13:22:45.974240] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:57.034 [2024-11-17 13:22:46.013470] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:57.034 [2024-11-17 13:22:46.018567] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:57.294 13:22:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:57.294 13:22:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:57.294 13:22:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:57.294 13:22:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:57.294 13:22:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:57.294 13:22:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:57.294 13:22:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.294 13:22:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.294 13:22:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.294 13:22:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.294 13:22:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.294 13:22:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:57.294 "name": "raid_bdev1", 00:12:57.294 "uuid": 
"12d45c08-9b35-458d-83da-f43e7ad38658", 00:12:57.294 "strip_size_kb": 0, 00:12:57.294 "state": "online", 00:12:57.294 "raid_level": "raid1", 00:12:57.294 "superblock": true, 00:12:57.294 "num_base_bdevs": 2, 00:12:57.294 "num_base_bdevs_discovered": 2, 00:12:57.294 "num_base_bdevs_operational": 2, 00:12:57.294 "base_bdevs_list": [ 00:12:57.294 { 00:12:57.294 "name": "spare", 00:12:57.294 "uuid": "049c22f4-60fc-5aa3-b145-1277f76e6ef2", 00:12:57.294 "is_configured": true, 00:12:57.294 "data_offset": 2048, 00:12:57.294 "data_size": 63488 00:12:57.294 }, 00:12:57.294 { 00:12:57.294 "name": "BaseBdev2", 00:12:57.294 "uuid": "c5c5473c-b956-51d9-8911-ee13e93a776a", 00:12:57.294 "is_configured": true, 00:12:57.294 "data_offset": 2048, 00:12:57.294 "data_size": 63488 00:12:57.294 } 00:12:57.294 ] 00:12:57.294 }' 00:12:57.294 13:22:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:57.294 13:22:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:57.294 13:22:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:57.294 13:22:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:57.294 13:22:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:12:57.294 13:22:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:57.294 13:22:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:57.294 13:22:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:57.294 13:22:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:57.294 13:22:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:57.294 13:22:46 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.294 13:22:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.294 13:22:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.294 13:22:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.294 13:22:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.294 13:22:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:57.294 "name": "raid_bdev1", 00:12:57.294 "uuid": "12d45c08-9b35-458d-83da-f43e7ad38658", 00:12:57.294 "strip_size_kb": 0, 00:12:57.294 "state": "online", 00:12:57.294 "raid_level": "raid1", 00:12:57.294 "superblock": true, 00:12:57.294 "num_base_bdevs": 2, 00:12:57.294 "num_base_bdevs_discovered": 2, 00:12:57.294 "num_base_bdevs_operational": 2, 00:12:57.294 "base_bdevs_list": [ 00:12:57.294 { 00:12:57.294 "name": "spare", 00:12:57.294 "uuid": "049c22f4-60fc-5aa3-b145-1277f76e6ef2", 00:12:57.294 "is_configured": true, 00:12:57.294 "data_offset": 2048, 00:12:57.294 "data_size": 63488 00:12:57.294 }, 00:12:57.294 { 00:12:57.294 "name": "BaseBdev2", 00:12:57.294 "uuid": "c5c5473c-b956-51d9-8911-ee13e93a776a", 00:12:57.294 "is_configured": true, 00:12:57.294 "data_offset": 2048, 00:12:57.294 "data_size": 63488 00:12:57.294 } 00:12:57.294 ] 00:12:57.294 }' 00:12:57.294 13:22:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:57.555 13:22:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:57.555 13:22:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:57.555 89.57 IOPS, 268.71 MiB/s [2024-11-17T13:22:46.779Z] 13:22:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 
00:12:57.555 13:22:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:57.555 13:22:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:57.555 13:22:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:57.555 13:22:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:57.555 13:22:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:57.555 13:22:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:57.555 13:22:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:57.555 13:22:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:57.555 13:22:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:57.555 13:22:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:57.555 13:22:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.555 13:22:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.555 13:22:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.555 13:22:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.555 13:22:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.555 13:22:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:57.555 "name": "raid_bdev1", 00:12:57.555 "uuid": "12d45c08-9b35-458d-83da-f43e7ad38658", 00:12:57.555 "strip_size_kb": 0, 00:12:57.555 "state": "online", 00:12:57.555 "raid_level": "raid1", 
00:12:57.555 "superblock": true, 00:12:57.555 "num_base_bdevs": 2, 00:12:57.555 "num_base_bdevs_discovered": 2, 00:12:57.555 "num_base_bdevs_operational": 2, 00:12:57.555 "base_bdevs_list": [ 00:12:57.555 { 00:12:57.555 "name": "spare", 00:12:57.555 "uuid": "049c22f4-60fc-5aa3-b145-1277f76e6ef2", 00:12:57.555 "is_configured": true, 00:12:57.555 "data_offset": 2048, 00:12:57.555 "data_size": 63488 00:12:57.555 }, 00:12:57.555 { 00:12:57.555 "name": "BaseBdev2", 00:12:57.555 "uuid": "c5c5473c-b956-51d9-8911-ee13e93a776a", 00:12:57.555 "is_configured": true, 00:12:57.555 "data_offset": 2048, 00:12:57.555 "data_size": 63488 00:12:57.555 } 00:12:57.555 ] 00:12:57.555 }' 00:12:57.555 13:22:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:57.555 13:22:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.814 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:57.814 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.814 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.814 [2024-11-17 13:22:47.022395] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:57.814 [2024-11-17 13:22:47.022518] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:58.074 00:12:58.074 Latency(us) 00:12:58.074 [2024-11-17T13:22:47.298Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:58.074 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:58.074 raid_bdev1 : 7.56 85.67 257.00 0.00 0.00 16704.04 307.65 113557.58 00:12:58.074 [2024-11-17T13:22:47.298Z] =================================================================================================================== 00:12:58.074 [2024-11-17T13:22:47.298Z] Total : 
85.67 257.00 0.00 0.00 16704.04 307.65 113557.58 00:12:58.074 [2024-11-17 13:22:47.133007] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:58.074 [2024-11-17 13:22:47.133073] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:58.074 [2024-11-17 13:22:47.133168] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:58.074 [2024-11-17 13:22:47.133180] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:58.074 { 00:12:58.074 "results": [ 00:12:58.074 { 00:12:58.074 "job": "raid_bdev1", 00:12:58.074 "core_mask": "0x1", 00:12:58.074 "workload": "randrw", 00:12:58.074 "percentage": 50, 00:12:58.074 "status": "finished", 00:12:58.074 "queue_depth": 2, 00:12:58.074 "io_size": 3145728, 00:12:58.074 "runtime": 7.564189, 00:12:58.074 "iops": 85.66681768527994, 00:12:58.074 "mibps": 257.0004530558398, 00:12:58.074 "io_failed": 0, 00:12:58.074 "io_timeout": 0, 00:12:58.074 "avg_latency_us": 16704.04071378511, 00:12:58.074 "min_latency_us": 307.6471615720524, 00:12:58.074 "max_latency_us": 113557.57554585153 00:12:58.074 } 00:12:58.074 ], 00:12:58.074 "core_count": 1 00:12:58.074 } 00:12:58.074 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.074 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.074 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.074 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.074 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:58.074 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.074 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 
]] 00:12:58.074 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:58.074 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:58.074 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:58.074 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:58.074 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:58.074 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:58.074 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:58.074 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:58.074 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:58.074 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:58.074 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:58.074 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:58.334 /dev/nbd0 00:12:58.334 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:58.334 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:58.334 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:58.334 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:12:58.334 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:58.334 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:58.334 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:58.334 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:12:58.334 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:58.334 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:58.334 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:58.334 1+0 records in 00:12:58.334 1+0 records out 00:12:58.334 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271044 s, 15.1 MB/s 00:12:58.334 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.334 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:12:58.334 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.334 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:58.334 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:12:58.334 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:58.334 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:58.334 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:58.334 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:12:58.334 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 
00:12:58.334 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:58.334 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:12:58.334 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:58.334 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:58.334 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:58.334 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:58.334 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:58.334 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:58.334 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:12:58.594 /dev/nbd1 00:12:58.594 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:58.594 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:58.594 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:58.594 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:12:58.594 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:58.594 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:58.594 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:58.594 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:12:58.594 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- 
# (( i = 1 )) 00:12:58.594 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:58.594 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:58.594 1+0 records in 00:12:58.594 1+0 records out 00:12:58.594 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00033976 s, 12.1 MB/s 00:12:58.594 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.594 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:12:58.594 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.594 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:58.594 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:12:58.594 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:58.594 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:58.594 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:58.854 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:58.854 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:58.854 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:58.854 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:58.854 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:58.854 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:58.854 13:22:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:58.854 13:22:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:58.854 13:22:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:58.854 13:22:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:58.854 13:22:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:58.854 13:22:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:58.854 13:22:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:59.115 13:22:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:59.115 13:22:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:59.115 13:22:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:59.115 13:22:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:59.115 13:22:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:59.115 13:22:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:59.115 13:22:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:59.115 13:22:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:59.115 13:22:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:59.115 13:22:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 
00:12:59.115 13:22:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:59.115 13:22:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:59.115 13:22:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:59.115 13:22:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:59.115 13:22:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:59.115 13:22:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:59.115 13:22:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:59.115 13:22:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:59.115 13:22:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:59.115 13:22:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.115 13:22:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.115 13:22:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.115 13:22:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:59.115 13:22:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.115 13:22:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.115 [2024-11-17 13:22:48.313513] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:59.115 [2024-11-17 13:22:48.313571] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:59.115 [2024-11-17 13:22:48.313602] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:12:59.115 
[2024-11-17 13:22:48.313612] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:59.115 [2024-11-17 13:22:48.315845] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:59.115 [2024-11-17 13:22:48.315889] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:59.115 [2024-11-17 13:22:48.315981] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:59.115 [2024-11-17 13:22:48.316037] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:59.115 [2024-11-17 13:22:48.316178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:59.115 spare 00:12:59.115 13:22:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.115 13:22:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:59.115 13:22:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.115 13:22:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.376 [2024-11-17 13:22:48.416109] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:12:59.376 [2024-11-17 13:22:48.416172] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:59.376 [2024-11-17 13:22:48.416560] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:12:59.376 [2024-11-17 13:22:48.416876] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:12:59.376 [2024-11-17 13:22:48.416897] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:12:59.376 [2024-11-17 13:22:48.417107] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:59.376 13:22:48 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.376 13:22:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:59.376 13:22:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:59.376 13:22:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:59.376 13:22:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:59.376 13:22:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:59.376 13:22:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:59.376 13:22:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.376 13:22:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.376 13:22:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.376 13:22:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.376 13:22:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.376 13:22:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.376 13:22:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.376 13:22:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.376 13:22:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.376 13:22:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.376 "name": "raid_bdev1", 00:12:59.376 "uuid": "12d45c08-9b35-458d-83da-f43e7ad38658", 00:12:59.376 "strip_size_kb": 0, 00:12:59.376 
"state": "online", 00:12:59.376 "raid_level": "raid1", 00:12:59.376 "superblock": true, 00:12:59.376 "num_base_bdevs": 2, 00:12:59.376 "num_base_bdevs_discovered": 2, 00:12:59.376 "num_base_bdevs_operational": 2, 00:12:59.376 "base_bdevs_list": [ 00:12:59.376 { 00:12:59.376 "name": "spare", 00:12:59.376 "uuid": "049c22f4-60fc-5aa3-b145-1277f76e6ef2", 00:12:59.376 "is_configured": true, 00:12:59.376 "data_offset": 2048, 00:12:59.376 "data_size": 63488 00:12:59.376 }, 00:12:59.376 { 00:12:59.376 "name": "BaseBdev2", 00:12:59.376 "uuid": "c5c5473c-b956-51d9-8911-ee13e93a776a", 00:12:59.376 "is_configured": true, 00:12:59.376 "data_offset": 2048, 00:12:59.376 "data_size": 63488 00:12:59.376 } 00:12:59.376 ] 00:12:59.376 }' 00:12:59.376 13:22:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.376 13:22:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.945 13:22:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:59.945 13:22:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:59.945 13:22:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:59.945 13:22:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:59.945 13:22:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:59.945 13:22:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.945 13:22:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.945 13:22:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.945 13:22:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.945 13:22:48 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.945 13:22:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:59.945 "name": "raid_bdev1", 00:12:59.945 "uuid": "12d45c08-9b35-458d-83da-f43e7ad38658", 00:12:59.945 "strip_size_kb": 0, 00:12:59.945 "state": "online", 00:12:59.945 "raid_level": "raid1", 00:12:59.945 "superblock": true, 00:12:59.945 "num_base_bdevs": 2, 00:12:59.945 "num_base_bdevs_discovered": 2, 00:12:59.945 "num_base_bdevs_operational": 2, 00:12:59.945 "base_bdevs_list": [ 00:12:59.945 { 00:12:59.945 "name": "spare", 00:12:59.945 "uuid": "049c22f4-60fc-5aa3-b145-1277f76e6ef2", 00:12:59.945 "is_configured": true, 00:12:59.945 "data_offset": 2048, 00:12:59.945 "data_size": 63488 00:12:59.945 }, 00:12:59.945 { 00:12:59.945 "name": "BaseBdev2", 00:12:59.945 "uuid": "c5c5473c-b956-51d9-8911-ee13e93a776a", 00:12:59.945 "is_configured": true, 00:12:59.945 "data_offset": 2048, 00:12:59.945 "data_size": 63488 00:12:59.945 } 00:12:59.945 ] 00:12:59.945 }' 00:12:59.945 13:22:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:59.945 13:22:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:59.945 13:22:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:59.945 13:22:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:59.945 13:22:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:59.945 13:22:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.945 13:22:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.945 13:22:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.945 13:22:49 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.945 13:22:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:59.945 13:22:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:59.945 13:22:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.945 13:22:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.945 [2024-11-17 13:22:49.056356] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:59.945 13:22:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.945 13:22:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:59.945 13:22:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:59.945 13:22:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:59.945 13:22:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:59.945 13:22:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:59.945 13:22:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:59.945 13:22:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.945 13:22:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.945 13:22:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.945 13:22:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.945 13:22:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:12:59.945 13:22:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.945 13:22:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.945 13:22:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.945 13:22:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.945 13:22:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.945 "name": "raid_bdev1", 00:12:59.945 "uuid": "12d45c08-9b35-458d-83da-f43e7ad38658", 00:12:59.945 "strip_size_kb": 0, 00:12:59.945 "state": "online", 00:12:59.945 "raid_level": "raid1", 00:12:59.945 "superblock": true, 00:12:59.945 "num_base_bdevs": 2, 00:12:59.945 "num_base_bdevs_discovered": 1, 00:12:59.945 "num_base_bdevs_operational": 1, 00:12:59.945 "base_bdevs_list": [ 00:12:59.945 { 00:12:59.945 "name": null, 00:12:59.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.945 "is_configured": false, 00:12:59.945 "data_offset": 0, 00:12:59.945 "data_size": 63488 00:12:59.945 }, 00:12:59.945 { 00:12:59.945 "name": "BaseBdev2", 00:12:59.945 "uuid": "c5c5473c-b956-51d9-8911-ee13e93a776a", 00:12:59.945 "is_configured": true, 00:12:59.945 "data_offset": 2048, 00:12:59.945 "data_size": 63488 00:12:59.945 } 00:12:59.945 ] 00:12:59.945 }' 00:12:59.945 13:22:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.945 13:22:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.513 13:22:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:00.513 13:22:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.513 13:22:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.513 [2024-11-17 
13:22:49.495691] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:00.513 [2024-11-17 13:22:49.495887] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:00.513 [2024-11-17 13:22:49.495903] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:00.513 [2024-11-17 13:22:49.495940] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:00.513 [2024-11-17 13:22:49.511839] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:13:00.513 13:22:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.513 13:22:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:00.513 [2024-11-17 13:22:49.513828] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:01.476 13:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:01.476 13:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:01.476 13:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:01.476 13:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:01.476 13:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:01.476 13:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.476 13:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.476 13:22:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.476 13:22:50 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:13:01.476 13:22:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.476 13:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:01.476 "name": "raid_bdev1", 00:13:01.476 "uuid": "12d45c08-9b35-458d-83da-f43e7ad38658", 00:13:01.476 "strip_size_kb": 0, 00:13:01.476 "state": "online", 00:13:01.476 "raid_level": "raid1", 00:13:01.476 "superblock": true, 00:13:01.476 "num_base_bdevs": 2, 00:13:01.476 "num_base_bdevs_discovered": 2, 00:13:01.476 "num_base_bdevs_operational": 2, 00:13:01.476 "process": { 00:13:01.476 "type": "rebuild", 00:13:01.476 "target": "spare", 00:13:01.476 "progress": { 00:13:01.476 "blocks": 20480, 00:13:01.476 "percent": 32 00:13:01.476 } 00:13:01.476 }, 00:13:01.476 "base_bdevs_list": [ 00:13:01.476 { 00:13:01.476 "name": "spare", 00:13:01.476 "uuid": "049c22f4-60fc-5aa3-b145-1277f76e6ef2", 00:13:01.476 "is_configured": true, 00:13:01.476 "data_offset": 2048, 00:13:01.476 "data_size": 63488 00:13:01.476 }, 00:13:01.476 { 00:13:01.476 "name": "BaseBdev2", 00:13:01.476 "uuid": "c5c5473c-b956-51d9-8911-ee13e93a776a", 00:13:01.476 "is_configured": true, 00:13:01.476 "data_offset": 2048, 00:13:01.476 "data_size": 63488 00:13:01.476 } 00:13:01.476 ] 00:13:01.476 }' 00:13:01.476 13:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:01.476 13:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:01.476 13:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:01.476 13:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:01.476 13:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:01.476 13:22:50 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.476 13:22:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.476 [2024-11-17 13:22:50.673812] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:01.737 [2024-11-17 13:22:50.719058] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:01.737 [2024-11-17 13:22:50.719176] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:01.737 [2024-11-17 13:22:50.719193] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:01.737 [2024-11-17 13:22:50.719202] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:01.737 13:22:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.737 13:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:01.737 13:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:01.737 13:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:01.737 13:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:01.737 13:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:01.737 13:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:01.737 13:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.737 13:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.737 13:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.737 13:22:50 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.737 13:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.737 13:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.737 13:22:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.737 13:22:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.737 13:22:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.737 13:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.737 "name": "raid_bdev1", 00:13:01.737 "uuid": "12d45c08-9b35-458d-83da-f43e7ad38658", 00:13:01.737 "strip_size_kb": 0, 00:13:01.737 "state": "online", 00:13:01.737 "raid_level": "raid1", 00:13:01.737 "superblock": true, 00:13:01.737 "num_base_bdevs": 2, 00:13:01.737 "num_base_bdevs_discovered": 1, 00:13:01.737 "num_base_bdevs_operational": 1, 00:13:01.737 "base_bdevs_list": [ 00:13:01.737 { 00:13:01.737 "name": null, 00:13:01.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.737 "is_configured": false, 00:13:01.737 "data_offset": 0, 00:13:01.737 "data_size": 63488 00:13:01.737 }, 00:13:01.737 { 00:13:01.737 "name": "BaseBdev2", 00:13:01.737 "uuid": "c5c5473c-b956-51d9-8911-ee13e93a776a", 00:13:01.737 "is_configured": true, 00:13:01.737 "data_offset": 2048, 00:13:01.737 "data_size": 63488 00:13:01.737 } 00:13:01.737 ] 00:13:01.737 }' 00:13:01.737 13:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.737 13:22:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.997 13:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:01.997 13:22:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 
-- # xtrace_disable 00:13:01.997 13:22:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.997 [2024-11-17 13:22:51.209010] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:01.997 [2024-11-17 13:22:51.209129] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:01.997 [2024-11-17 13:22:51.209171] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:01.997 [2024-11-17 13:22:51.209201] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:01.997 [2024-11-17 13:22:51.209780] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:01.997 [2024-11-17 13:22:51.209848] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:01.997 [2024-11-17 13:22:51.209999] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:01.997 [2024-11-17 13:22:51.210047] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:01.997 [2024-11-17 13:22:51.210097] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:01.997 [2024-11-17 13:22:51.210149] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:02.257 [2024-11-17 13:22:51.227172] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:13:02.257 spare 00:13:02.257 13:22:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.257 13:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:02.257 [2024-11-17 13:22:51.229147] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:03.197 13:22:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:03.197 13:22:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:03.197 13:22:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:03.197 13:22:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:03.197 13:22:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:03.197 13:22:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.197 13:22:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.197 13:22:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.197 13:22:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.197 13:22:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.197 13:22:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:03.197 "name": "raid_bdev1", 00:13:03.197 "uuid": "12d45c08-9b35-458d-83da-f43e7ad38658", 00:13:03.197 "strip_size_kb": 0, 00:13:03.197 
"state": "online", 00:13:03.197 "raid_level": "raid1", 00:13:03.197 "superblock": true, 00:13:03.197 "num_base_bdevs": 2, 00:13:03.197 "num_base_bdevs_discovered": 2, 00:13:03.197 "num_base_bdevs_operational": 2, 00:13:03.197 "process": { 00:13:03.197 "type": "rebuild", 00:13:03.197 "target": "spare", 00:13:03.197 "progress": { 00:13:03.197 "blocks": 20480, 00:13:03.197 "percent": 32 00:13:03.197 } 00:13:03.197 }, 00:13:03.197 "base_bdevs_list": [ 00:13:03.197 { 00:13:03.197 "name": "spare", 00:13:03.197 "uuid": "049c22f4-60fc-5aa3-b145-1277f76e6ef2", 00:13:03.197 "is_configured": true, 00:13:03.197 "data_offset": 2048, 00:13:03.197 "data_size": 63488 00:13:03.197 }, 00:13:03.197 { 00:13:03.197 "name": "BaseBdev2", 00:13:03.197 "uuid": "c5c5473c-b956-51d9-8911-ee13e93a776a", 00:13:03.197 "is_configured": true, 00:13:03.197 "data_offset": 2048, 00:13:03.197 "data_size": 63488 00:13:03.197 } 00:13:03.197 ] 00:13:03.197 }' 00:13:03.197 13:22:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:03.197 13:22:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:03.197 13:22:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:03.197 13:22:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:03.197 13:22:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:03.197 13:22:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.197 13:22:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.197 [2024-11-17 13:22:52.381095] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:03.457 [2024-11-17 13:22:52.434667] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:13:03.457 [2024-11-17 13:22:52.434801] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:03.457 [2024-11-17 13:22:52.434823] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:03.457 [2024-11-17 13:22:52.434831] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:03.457 13:22:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.457 13:22:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:03.457 13:22:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:03.457 13:22:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:03.457 13:22:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:03.457 13:22:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:03.457 13:22:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:03.457 13:22:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:03.457 13:22:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:03.457 13:22:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:03.457 13:22:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:03.457 13:22:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.457 13:22:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.457 13:22:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.457 13:22:52 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.457 13:22:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.457 13:22:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:03.457 "name": "raid_bdev1", 00:13:03.457 "uuid": "12d45c08-9b35-458d-83da-f43e7ad38658", 00:13:03.457 "strip_size_kb": 0, 00:13:03.457 "state": "online", 00:13:03.457 "raid_level": "raid1", 00:13:03.457 "superblock": true, 00:13:03.457 "num_base_bdevs": 2, 00:13:03.457 "num_base_bdevs_discovered": 1, 00:13:03.457 "num_base_bdevs_operational": 1, 00:13:03.457 "base_bdevs_list": [ 00:13:03.457 { 00:13:03.457 "name": null, 00:13:03.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.457 "is_configured": false, 00:13:03.457 "data_offset": 0, 00:13:03.457 "data_size": 63488 00:13:03.457 }, 00:13:03.457 { 00:13:03.457 "name": "BaseBdev2", 00:13:03.457 "uuid": "c5c5473c-b956-51d9-8911-ee13e93a776a", 00:13:03.457 "is_configured": true, 00:13:03.457 "data_offset": 2048, 00:13:03.457 "data_size": 63488 00:13:03.457 } 00:13:03.457 ] 00:13:03.457 }' 00:13:03.457 13:22:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:03.457 13:22:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.717 13:22:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:03.717 13:22:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:03.717 13:22:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:03.717 13:22:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:03.717 13:22:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:03.717 13:22:52 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.717 13:22:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.717 13:22:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.717 13:22:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.976 13:22:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.976 13:22:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:03.976 "name": "raid_bdev1", 00:13:03.976 "uuid": "12d45c08-9b35-458d-83da-f43e7ad38658", 00:13:03.976 "strip_size_kb": 0, 00:13:03.977 "state": "online", 00:13:03.977 "raid_level": "raid1", 00:13:03.977 "superblock": true, 00:13:03.977 "num_base_bdevs": 2, 00:13:03.977 "num_base_bdevs_discovered": 1, 00:13:03.977 "num_base_bdevs_operational": 1, 00:13:03.977 "base_bdevs_list": [ 00:13:03.977 { 00:13:03.977 "name": null, 00:13:03.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.977 "is_configured": false, 00:13:03.977 "data_offset": 0, 00:13:03.977 "data_size": 63488 00:13:03.977 }, 00:13:03.977 { 00:13:03.977 "name": "BaseBdev2", 00:13:03.977 "uuid": "c5c5473c-b956-51d9-8911-ee13e93a776a", 00:13:03.977 "is_configured": true, 00:13:03.977 "data_offset": 2048, 00:13:03.977 "data_size": 63488 00:13:03.977 } 00:13:03.977 ] 00:13:03.977 }' 00:13:03.977 13:22:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:03.977 13:22:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:03.977 13:22:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:03.977 13:22:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:03.977 13:22:53 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:03.977 13:22:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.977 13:22:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.977 13:22:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.977 13:22:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:03.977 13:22:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.977 13:22:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.977 [2024-11-17 13:22:53.092461] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:03.977 [2024-11-17 13:22:53.092515] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:03.977 [2024-11-17 13:22:53.092536] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:13:03.977 [2024-11-17 13:22:53.092544] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:03.977 [2024-11-17 13:22:53.092986] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:03.977 [2024-11-17 13:22:53.093004] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:03.977 [2024-11-17 13:22:53.093081] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:03.977 [2024-11-17 13:22:53.093095] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:03.977 [2024-11-17 13:22:53.093103] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:03.977 [2024-11-17 13:22:53.093112] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:03.977 BaseBdev1 00:13:03.977 13:22:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.977 13:22:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:04.916 13:22:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:04.916 13:22:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:04.916 13:22:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:04.916 13:22:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:04.916 13:22:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:04.916 13:22:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:04.916 13:22:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.916 13:22:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.916 13:22:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.916 13:22:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.916 13:22:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.916 13:22:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.916 13:22:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.916 13:22:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.916 13:22:54 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.176 13:22:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:05.176 "name": "raid_bdev1", 00:13:05.176 "uuid": "12d45c08-9b35-458d-83da-f43e7ad38658", 00:13:05.176 "strip_size_kb": 0, 00:13:05.176 "state": "online", 00:13:05.176 "raid_level": "raid1", 00:13:05.176 "superblock": true, 00:13:05.176 "num_base_bdevs": 2, 00:13:05.176 "num_base_bdevs_discovered": 1, 00:13:05.176 "num_base_bdevs_operational": 1, 00:13:05.176 "base_bdevs_list": [ 00:13:05.176 { 00:13:05.176 "name": null, 00:13:05.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.176 "is_configured": false, 00:13:05.176 "data_offset": 0, 00:13:05.176 "data_size": 63488 00:13:05.176 }, 00:13:05.176 { 00:13:05.176 "name": "BaseBdev2", 00:13:05.176 "uuid": "c5c5473c-b956-51d9-8911-ee13e93a776a", 00:13:05.176 "is_configured": true, 00:13:05.176 "data_offset": 2048, 00:13:05.176 "data_size": 63488 00:13:05.176 } 00:13:05.176 ] 00:13:05.176 }' 00:13:05.176 13:22:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:05.176 13:22:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.436 13:22:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:05.436 13:22:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:05.436 13:22:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:05.436 13:22:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:05.436 13:22:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:05.436 13:22:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.436 13:22:54 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.436 13:22:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.436 13:22:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.436 13:22:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.436 13:22:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:05.436 "name": "raid_bdev1", 00:13:05.436 "uuid": "12d45c08-9b35-458d-83da-f43e7ad38658", 00:13:05.436 "strip_size_kb": 0, 00:13:05.436 "state": "online", 00:13:05.436 "raid_level": "raid1", 00:13:05.436 "superblock": true, 00:13:05.436 "num_base_bdevs": 2, 00:13:05.436 "num_base_bdevs_discovered": 1, 00:13:05.436 "num_base_bdevs_operational": 1, 00:13:05.436 "base_bdevs_list": [ 00:13:05.436 { 00:13:05.436 "name": null, 00:13:05.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.436 "is_configured": false, 00:13:05.436 "data_offset": 0, 00:13:05.436 "data_size": 63488 00:13:05.436 }, 00:13:05.436 { 00:13:05.436 "name": "BaseBdev2", 00:13:05.436 "uuid": "c5c5473c-b956-51d9-8911-ee13e93a776a", 00:13:05.436 "is_configured": true, 00:13:05.436 "data_offset": 2048, 00:13:05.436 "data_size": 63488 00:13:05.436 } 00:13:05.436 ] 00:13:05.436 }' 00:13:05.436 13:22:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:05.436 13:22:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:05.436 13:22:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:05.696 13:22:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:05.696 13:22:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:05.696 13:22:54 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@652 -- # local es=0 00:13:05.696 13:22:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:05.696 13:22:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:05.696 13:22:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:05.696 13:22:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:05.696 13:22:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:05.696 13:22:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:05.696 13:22:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.696 13:22:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.696 [2024-11-17 13:22:54.710099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:05.696 [2024-11-17 13:22:54.710265] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:05.696 [2024-11-17 13:22:54.710281] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:05.696 request: 00:13:05.696 { 00:13:05.696 "base_bdev": "BaseBdev1", 00:13:05.696 "raid_bdev": "raid_bdev1", 00:13:05.696 "method": "bdev_raid_add_base_bdev", 00:13:05.696 "req_id": 1 00:13:05.696 } 00:13:05.696 Got JSON-RPC error response 00:13:05.696 response: 00:13:05.696 { 00:13:05.696 "code": -22, 00:13:05.696 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:05.696 } 00:13:05.696 13:22:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
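The traced `NOT rpc_cmd bdev_raid_add_base_bdev …` call above exercises an expected-failure assertion: `valid_exec_arg` checks the command is runnable, the RPC is invoked, its JSON-RPC error (-22) is captured as `es=1`, and the test passes only because the command failed. A minimal re-creation of that inversion pattern is sketched below; the `NOT` body is simplified from the harness's multi-step version, and `fake_rpc_add_base_bdev` is a hypothetical stand-in for the real `rpc_cmd`, emitting the same error shape seen in the log.

```shell
# Simplified sketch of the harness's expected-failure wrapper: run a
# command that must fail, capture its exit status, and succeed only if
# the status is non-zero.
NOT() {
    local es=0
    "$@" || es=$?
    [ "$es" -ne 0 ]   # invert: 0 (pass) only when the command failed
}

# Hypothetical stand-in for rpc_cmd: mimics the -22 "Invalid argument"
# rejection of a base bdev with a stale superblock, as in the log.
fake_rpc_add_base_bdev() {
    echo '{"code": -22, "message": "Failed to add base bdev to RAID bdev: Invalid argument"}' >&2
    return 1
}

if NOT fake_rpc_add_base_bdev raid_bdev1 BaseBdev1 2>/dev/null; then
    echo "expected failure observed"
fi
```

The inversion is what lets the script keep running under the usual fail-fast conventions: the RPC's non-zero status is consumed by `NOT` rather than aborting the test.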
00:13:05.696 13:22:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:13:05.696 13:22:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:05.696 13:22:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:05.696 13:22:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:05.696 13:22:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:06.636 13:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:06.636 13:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:06.636 13:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:06.636 13:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:06.636 13:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:06.636 13:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:06.636 13:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.636 13:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.636 13:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.636 13:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.636 13:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.636 13:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.636 13:22:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:06.636 13:22:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.636 13:22:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.636 13:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.636 "name": "raid_bdev1", 00:13:06.636 "uuid": "12d45c08-9b35-458d-83da-f43e7ad38658", 00:13:06.636 "strip_size_kb": 0, 00:13:06.636 "state": "online", 00:13:06.636 "raid_level": "raid1", 00:13:06.636 "superblock": true, 00:13:06.636 "num_base_bdevs": 2, 00:13:06.636 "num_base_bdevs_discovered": 1, 00:13:06.636 "num_base_bdevs_operational": 1, 00:13:06.636 "base_bdevs_list": [ 00:13:06.636 { 00:13:06.636 "name": null, 00:13:06.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.636 "is_configured": false, 00:13:06.636 "data_offset": 0, 00:13:06.636 "data_size": 63488 00:13:06.636 }, 00:13:06.636 { 00:13:06.636 "name": "BaseBdev2", 00:13:06.636 "uuid": "c5c5473c-b956-51d9-8911-ee13e93a776a", 00:13:06.636 "is_configured": true, 00:13:06.636 "data_offset": 2048, 00:13:06.636 "data_size": 63488 00:13:06.636 } 00:13:06.636 ] 00:13:06.636 }' 00:13:06.636 13:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.636 13:22:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:07.207 13:22:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:07.207 13:22:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:07.207 13:22:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:07.207 13:22:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:07.207 13:22:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:07.207 13:22:56 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.207 13:22:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.207 13:22:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.207 13:22:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:07.207 13:22:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.207 13:22:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:07.207 "name": "raid_bdev1", 00:13:07.207 "uuid": "12d45c08-9b35-458d-83da-f43e7ad38658", 00:13:07.207 "strip_size_kb": 0, 00:13:07.207 "state": "online", 00:13:07.207 "raid_level": "raid1", 00:13:07.207 "superblock": true, 00:13:07.207 "num_base_bdevs": 2, 00:13:07.207 "num_base_bdevs_discovered": 1, 00:13:07.207 "num_base_bdevs_operational": 1, 00:13:07.207 "base_bdevs_list": [ 00:13:07.207 { 00:13:07.207 "name": null, 00:13:07.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.207 "is_configured": false, 00:13:07.207 "data_offset": 0, 00:13:07.207 "data_size": 63488 00:13:07.207 }, 00:13:07.207 { 00:13:07.207 "name": "BaseBdev2", 00:13:07.207 "uuid": "c5c5473c-b956-51d9-8911-ee13e93a776a", 00:13:07.207 "is_configured": true, 00:13:07.207 "data_offset": 2048, 00:13:07.207 "data_size": 63488 00:13:07.207 } 00:13:07.207 ] 00:13:07.207 }' 00:13:07.207 13:22:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:07.207 13:22:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:07.207 13:22:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:07.207 13:22:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:07.207 13:22:56 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 76750 00:13:07.207 13:22:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 76750 ']' 00:13:07.207 13:22:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 76750 00:13:07.207 13:22:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:13:07.207 13:22:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:07.207 13:22:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76750 00:13:07.207 13:22:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:07.207 killing process with pid 76750 00:13:07.207 Received shutdown signal, test time was about 16.817273 seconds 00:13:07.207 00:13:07.207 Latency(us) 00:13:07.207 [2024-11-17T13:22:56.431Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:07.207 [2024-11-17T13:22:56.431Z] =================================================================================================================== 00:13:07.207 [2024-11-17T13:22:56.431Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:07.207 13:22:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:07.207 13:22:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76750' 00:13:07.207 13:22:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 76750 00:13:07.207 [2024-11-17 13:22:56.345409] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:07.207 [2024-11-17 13:22:56.345535] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:07.207 13:22:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 76750 00:13:07.207 [2024-11-17 13:22:56.345587] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:07.207 [2024-11-17 13:22:56.345598] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:07.467 [2024-11-17 13:22:56.570917] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:08.849 ************************************ 00:13:08.849 END TEST raid_rebuild_test_sb_io 00:13:08.849 13:22:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:08.849 00:13:08.849 real 0m20.063s 00:13:08.849 user 0m26.212s 00:13:08.849 sys 0m2.270s 00:13:08.849 13:22:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:08.849 13:22:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.849 ************************************ 00:13:08.849 13:22:57 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:13:08.849 13:22:57 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:13:08.849 13:22:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:08.849 13:22:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:08.849 13:22:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:08.849 ************************************ 00:13:08.849 START TEST raid_rebuild_test 00:13:08.849 ************************************ 00:13:08.849 13:22:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:13:08.849 13:22:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:08.849 13:22:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:08.849 13:22:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:08.849 13:22:57 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:08.849 13:22:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:08.849 13:22:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:08.849 13:22:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:08.849 13:22:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:08.849 13:22:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:08.849 13:22:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:08.849 13:22:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:08.850 13:22:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:08.850 13:22:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:08.850 13:22:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:08.850 13:22:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:08.850 13:22:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:08.850 13:22:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:08.850 13:22:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:08.850 13:22:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:08.850 13:22:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:08.850 13:22:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:08.850 13:22:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:08.850 13:22:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 
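The `(( i = 1 )) … (( i <= num_base_bdevs )) … echo BaseBdev$i … (( i++ ))` lines traced above are a counting loop whose output is collected into the `base_bdevs` array. Condensed into one runnable sketch (names taken from the trace):

```shell
# Assemble the base bdev name list the way the traced loop does: a
# command substitution over a counting loop, one name per iteration.
num_base_bdevs=4
base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo "BaseBdev$i"; done))
echo "${base_bdevs[@]}"   # BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4
```

This matches the `base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')` assignment the trace shows immediately after the loop.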
00:13:08.850 13:22:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:08.850 13:22:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:08.850 13:22:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:08.850 13:22:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:08.850 13:22:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:08.850 13:22:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:08.850 13:22:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77439 00:13:08.850 13:22:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77439 00:13:08.850 13:22:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:08.850 13:22:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 77439 ']' 00:13:08.850 13:22:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:08.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:08.850 13:22:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:08.850 13:22:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:08.850 13:22:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:08.850 13:22:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.850 [2024-11-17 13:22:57.870693] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
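Above, the test launches `bdevperf` in the background, records `raid_pid=77439`, and then blocks in `waitforlisten` until the RPC socket `/var/tmp/spdk.sock` is ready ("Waiting for process to start up and listen on UNIX domain socket…"). A hedged sketch of that polling step, assuming the internals rather than reproducing them; the retry budget and interval are illustrative, and `-e` is used in place of a real Unix-socket check (`-S`) so the sketch can be exercised with a plain file:

```shell
# Poll until the given path appears (the server's listen socket in the
# real harness), or give up after max_retries attempts.
waitforlisten() {
    local sock=$1 max_retries=${2:-100}
    local i
    for ((i = 0; i < max_retries; i++)); do
        [ -e "$sock" ] && return 0   # path exists: server is up
        sleep 0.1
    done
    echo "timed out waiting for $sock" >&2
    return 1
}

# Typical use (paths hypothetical):
#   bdevperf ... & raid_pid=$!
#   waitforlisten /var/tmp/spdk.sock
probe=$(mktemp)
waitforlisten "$probe" 5 && echo "listening"
rm -f "$probe"
```

Polling with a bounded retry count keeps the test from hanging forever if `bdevperf` crashes before it ever opens the socket; the later `killprocess $raid_pid` teardown relies on the recorded PID.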
00:13:08.850 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:08.850 Zero copy mechanism will not be used. 00:13:08.850 [2024-11-17 13:22:57.870878] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77439 ] 00:13:08.850 [2024-11-17 13:22:58.042593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:09.110 [2024-11-17 13:22:58.152427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.370 [2024-11-17 13:22:58.356832] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:09.370 [2024-11-17 13:22:58.356866] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:09.629 13:22:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:09.629 13:22:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:13:09.629 13:22:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:09.629 13:22:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:09.629 13:22:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.629 13:22:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.629 BaseBdev1_malloc 00:13:09.629 13:22:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.629 13:22:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:09.629 13:22:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.630 13:22:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:13:09.630 [2024-11-17 13:22:58.747035] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:09.630 [2024-11-17 13:22:58.747156] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:09.630 [2024-11-17 13:22:58.747185] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:09.630 [2024-11-17 13:22:58.747196] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:09.630 [2024-11-17 13:22:58.749226] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:09.630 [2024-11-17 13:22:58.749263] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:09.630 BaseBdev1 00:13:09.630 13:22:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.630 13:22:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:09.630 13:22:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:09.630 13:22:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.630 13:22:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.630 BaseBdev2_malloc 00:13:09.630 13:22:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.630 13:22:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:09.630 13:22:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.630 13:22:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.630 [2024-11-17 13:22:58.802449] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:09.630 [2024-11-17 13:22:58.802504] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:13:09.630 [2024-11-17 13:22:58.802523] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:09.630 [2024-11-17 13:22:58.802535] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:09.630 [2024-11-17 13:22:58.804539] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:09.630 [2024-11-17 13:22:58.804626] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:09.630 BaseBdev2 00:13:09.630 13:22:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.630 13:22:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:09.630 13:22:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:09.630 13:22:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.630 13:22:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.890 BaseBdev3_malloc 00:13:09.890 13:22:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.890 13:22:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:09.890 13:22:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.890 13:22:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.890 [2024-11-17 13:22:58.873760] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:09.890 [2024-11-17 13:22:58.873811] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:09.890 [2024-11-17 13:22:58.873832] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:09.890 [2024-11-17 13:22:58.873843] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:09.890 [2024-11-17 13:22:58.875944] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:09.890 [2024-11-17 13:22:58.875985] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:09.890 BaseBdev3 00:13:09.890 13:22:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.890 13:22:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:09.890 13:22:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:09.890 13:22:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.890 13:22:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.890 BaseBdev4_malloc 00:13:09.890 13:22:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.890 13:22:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:09.890 13:22:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.890 13:22:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.890 [2024-11-17 13:22:58.930096] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:09.890 [2024-11-17 13:22:58.930149] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:09.890 [2024-11-17 13:22:58.930167] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:09.890 [2024-11-17 13:22:58.930178] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:09.890 [2024-11-17 13:22:58.932386] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:09.890 [2024-11-17 13:22:58.932426] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:09.890 BaseBdev4 00:13:09.890 13:22:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.890 13:22:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:09.890 13:22:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.890 13:22:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.890 spare_malloc 00:13:09.890 13:22:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.890 13:22:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:09.890 13:22:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.890 13:22:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.890 spare_delay 00:13:09.890 13:22:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.890 13:22:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:09.890 13:22:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.890 13:22:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.890 [2024-11-17 13:22:58.997087] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:09.890 [2024-11-17 13:22:58.997142] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:09.890 [2024-11-17 13:22:58.997163] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:09.890 [2024-11-17 13:22:58.997172] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:09.890 [2024-11-17 
13:22:58.999291] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:09.890 [2024-11-17 13:22:58.999325] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:09.890 spare 00:13:09.890 13:22:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.890 13:22:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:09.890 13:22:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.890 13:22:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.890 [2024-11-17 13:22:59.009113] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:09.890 [2024-11-17 13:22:59.010942] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:09.890 [2024-11-17 13:22:59.011006] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:09.890 [2024-11-17 13:22:59.011056] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:09.890 [2024-11-17 13:22:59.011127] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:09.890 [2024-11-17 13:22:59.011139] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:09.891 [2024-11-17 13:22:59.011400] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:09.891 [2024-11-17 13:22:59.011565] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:09.891 [2024-11-17 13:22:59.011577] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:09.891 [2024-11-17 13:22:59.011709] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:13:09.891 13:22:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.891 13:22:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:09.891 13:22:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:09.891 13:22:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:09.891 13:22:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:09.891 13:22:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:09.891 13:22:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:09.891 13:22:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.891 13:22:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:09.891 13:22:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:09.891 13:22:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:09.891 13:22:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.891 13:22:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.891 13:22:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.891 13:22:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.891 13:22:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.891 13:22:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.891 "name": "raid_bdev1", 00:13:09.891 "uuid": "d45fc88b-d13e-4424-98f3-34604c4c4582", 00:13:09.891 "strip_size_kb": 0, 00:13:09.891 "state": "online", 00:13:09.891 "raid_level": 
"raid1", 00:13:09.891 "superblock": false, 00:13:09.891 "num_base_bdevs": 4, 00:13:09.891 "num_base_bdevs_discovered": 4, 00:13:09.891 "num_base_bdevs_operational": 4, 00:13:09.891 "base_bdevs_list": [ 00:13:09.891 { 00:13:09.891 "name": "BaseBdev1", 00:13:09.891 "uuid": "a56d0542-4d2e-5184-a546-9e93b363d882", 00:13:09.891 "is_configured": true, 00:13:09.891 "data_offset": 0, 00:13:09.891 "data_size": 65536 00:13:09.891 }, 00:13:09.891 { 00:13:09.891 "name": "BaseBdev2", 00:13:09.891 "uuid": "08aa83e3-0454-5b50-8391-22d3f57d2d47", 00:13:09.891 "is_configured": true, 00:13:09.891 "data_offset": 0, 00:13:09.891 "data_size": 65536 00:13:09.891 }, 00:13:09.891 { 00:13:09.891 "name": "BaseBdev3", 00:13:09.891 "uuid": "33571580-0cec-535a-9890-a925ede894a3", 00:13:09.891 "is_configured": true, 00:13:09.891 "data_offset": 0, 00:13:09.891 "data_size": 65536 00:13:09.891 }, 00:13:09.891 { 00:13:09.891 "name": "BaseBdev4", 00:13:09.891 "uuid": "ce53180b-a0cc-5826-94d1-b88032710076", 00:13:09.891 "is_configured": true, 00:13:09.891 "data_offset": 0, 00:13:09.891 "data_size": 65536 00:13:09.891 } 00:13:09.891 ] 00:13:09.891 }' 00:13:09.891 13:22:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.891 13:22:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.480 13:22:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:10.481 13:22:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:10.481 13:22:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.481 13:22:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.481 [2024-11-17 13:22:59.452666] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:10.481 13:22:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.481 13:22:59 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:10.481 13:22:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.481 13:22:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:10.481 13:22:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.481 13:22:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.481 13:22:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.481 13:22:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:10.481 13:22:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:10.481 13:22:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:10.481 13:22:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:10.481 13:22:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:10.481 13:22:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:10.481 13:22:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:10.481 13:22:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:10.481 13:22:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:10.481 13:22:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:10.481 13:22:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:10.481 13:22:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:10.481 13:22:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:10.481 13:22:59 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:10.740 [2024-11-17 13:22:59.731913] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:10.740 /dev/nbd0 00:13:10.740 13:22:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:10.740 13:22:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:10.740 13:22:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:10.740 13:22:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:10.740 13:22:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:10.740 13:22:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:10.740 13:22:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:10.740 13:22:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:10.740 13:22:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:10.740 13:22:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:10.740 13:22:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:10.740 1+0 records in 00:13:10.740 1+0 records out 00:13:10.740 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000217328 s, 18.8 MB/s 00:13:10.740 13:22:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:10.740 13:22:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:10.740 13:22:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:13:10.740 13:22:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:10.740 13:22:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:10.740 13:22:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:10.740 13:22:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:10.740 13:22:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:10.740 13:22:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:10.740 13:22:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:13:17.339 65536+0 records in 00:13:17.339 65536+0 records out 00:13:17.339 33554432 bytes (34 MB, 32 MiB) copied, 5.73851 s, 5.8 MB/s 00:13:17.339 13:23:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:17.339 13:23:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:17.339 13:23:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:17.339 13:23:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:17.339 13:23:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:17.339 13:23:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:17.339 13:23:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:17.339 13:23:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:17.339 [2024-11-17 13:23:05.753644] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:17.339 13:23:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:17.339 
13:23:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:17.339 13:23:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:17.339 13:23:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:17.339 13:23:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:17.339 13:23:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:17.339 13:23:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:17.339 13:23:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:17.340 13:23:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.340 13:23:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.340 [2024-11-17 13:23:05.777695] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:17.340 13:23:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.340 13:23:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:17.340 13:23:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:17.340 13:23:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:17.340 13:23:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:17.340 13:23:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:17.340 13:23:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:17.340 13:23:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.340 13:23:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.340 13:23:05 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.340 13:23:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.340 13:23:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.340 13:23:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.340 13:23:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.340 13:23:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.340 13:23:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.340 13:23:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.340 "name": "raid_bdev1", 00:13:17.340 "uuid": "d45fc88b-d13e-4424-98f3-34604c4c4582", 00:13:17.340 "strip_size_kb": 0, 00:13:17.340 "state": "online", 00:13:17.340 "raid_level": "raid1", 00:13:17.340 "superblock": false, 00:13:17.340 "num_base_bdevs": 4, 00:13:17.340 "num_base_bdevs_discovered": 3, 00:13:17.340 "num_base_bdevs_operational": 3, 00:13:17.340 "base_bdevs_list": [ 00:13:17.340 { 00:13:17.340 "name": null, 00:13:17.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.340 "is_configured": false, 00:13:17.340 "data_offset": 0, 00:13:17.340 "data_size": 65536 00:13:17.340 }, 00:13:17.340 { 00:13:17.340 "name": "BaseBdev2", 00:13:17.340 "uuid": "08aa83e3-0454-5b50-8391-22d3f57d2d47", 00:13:17.340 "is_configured": true, 00:13:17.340 "data_offset": 0, 00:13:17.340 "data_size": 65536 00:13:17.340 }, 00:13:17.340 { 00:13:17.340 "name": "BaseBdev3", 00:13:17.340 "uuid": "33571580-0cec-535a-9890-a925ede894a3", 00:13:17.340 "is_configured": true, 00:13:17.340 "data_offset": 0, 00:13:17.340 "data_size": 65536 00:13:17.340 }, 00:13:17.340 { 00:13:17.340 "name": "BaseBdev4", 00:13:17.340 "uuid": "ce53180b-a0cc-5826-94d1-b88032710076", 00:13:17.340 
"is_configured": true, 00:13:17.340 "data_offset": 0, 00:13:17.340 "data_size": 65536 00:13:17.340 } 00:13:17.340 ] 00:13:17.340 }' 00:13:17.340 13:23:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.340 13:23:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.340 13:23:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:17.340 13:23:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.340 13:23:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.340 [2024-11-17 13:23:06.240872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:17.340 [2024-11-17 13:23:06.256338] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:13:17.340 13:23:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.340 13:23:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:17.340 [2024-11-17 13:23:06.258300] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:18.279 13:23:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:18.279 13:23:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:18.279 13:23:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:18.279 13:23:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:18.279 13:23:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:18.279 13:23:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.279 13:23:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:13:18.279 13:23:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.279 13:23:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.279 13:23:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.279 13:23:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:18.279 "name": "raid_bdev1", 00:13:18.279 "uuid": "d45fc88b-d13e-4424-98f3-34604c4c4582", 00:13:18.279 "strip_size_kb": 0, 00:13:18.279 "state": "online", 00:13:18.279 "raid_level": "raid1", 00:13:18.279 "superblock": false, 00:13:18.279 "num_base_bdevs": 4, 00:13:18.279 "num_base_bdevs_discovered": 4, 00:13:18.279 "num_base_bdevs_operational": 4, 00:13:18.279 "process": { 00:13:18.279 "type": "rebuild", 00:13:18.279 "target": "spare", 00:13:18.279 "progress": { 00:13:18.279 "blocks": 20480, 00:13:18.279 "percent": 31 00:13:18.279 } 00:13:18.279 }, 00:13:18.279 "base_bdevs_list": [ 00:13:18.279 { 00:13:18.279 "name": "spare", 00:13:18.279 "uuid": "b26a3fe9-f5cc-56d6-9921-49e4a756a2f3", 00:13:18.279 "is_configured": true, 00:13:18.279 "data_offset": 0, 00:13:18.280 "data_size": 65536 00:13:18.280 }, 00:13:18.280 { 00:13:18.280 "name": "BaseBdev2", 00:13:18.280 "uuid": "08aa83e3-0454-5b50-8391-22d3f57d2d47", 00:13:18.280 "is_configured": true, 00:13:18.280 "data_offset": 0, 00:13:18.280 "data_size": 65536 00:13:18.280 }, 00:13:18.280 { 00:13:18.280 "name": "BaseBdev3", 00:13:18.280 "uuid": "33571580-0cec-535a-9890-a925ede894a3", 00:13:18.280 "is_configured": true, 00:13:18.280 "data_offset": 0, 00:13:18.280 "data_size": 65536 00:13:18.280 }, 00:13:18.280 { 00:13:18.280 "name": "BaseBdev4", 00:13:18.280 "uuid": "ce53180b-a0cc-5826-94d1-b88032710076", 00:13:18.280 "is_configured": true, 00:13:18.280 "data_offset": 0, 00:13:18.280 "data_size": 65536 00:13:18.280 } 00:13:18.280 ] 00:13:18.280 }' 00:13:18.280 13:23:07 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:18.280 13:23:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:18.280 13:23:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:18.280 13:23:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:18.280 13:23:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:18.280 13:23:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.280 13:23:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.280 [2024-11-17 13:23:07.425634] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:18.280 [2024-11-17 13:23:07.463319] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:18.280 [2024-11-17 13:23:07.463380] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:18.280 [2024-11-17 13:23:07.463396] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:18.280 [2024-11-17 13:23:07.463405] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:18.280 13:23:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.280 13:23:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:18.280 13:23:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:18.280 13:23:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:18.280 13:23:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:18.280 13:23:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:13:18.280 13:23:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:18.280 13:23:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.280 13:23:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.280 13:23:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.280 13:23:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.280 13:23:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.280 13:23:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.280 13:23:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.280 13:23:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.539 13:23:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.539 13:23:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.539 "name": "raid_bdev1", 00:13:18.539 "uuid": "d45fc88b-d13e-4424-98f3-34604c4c4582", 00:13:18.539 "strip_size_kb": 0, 00:13:18.539 "state": "online", 00:13:18.539 "raid_level": "raid1", 00:13:18.539 "superblock": false, 00:13:18.539 "num_base_bdevs": 4, 00:13:18.539 "num_base_bdevs_discovered": 3, 00:13:18.539 "num_base_bdevs_operational": 3, 00:13:18.539 "base_bdevs_list": [ 00:13:18.539 { 00:13:18.539 "name": null, 00:13:18.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.539 "is_configured": false, 00:13:18.539 "data_offset": 0, 00:13:18.539 "data_size": 65536 00:13:18.539 }, 00:13:18.539 { 00:13:18.539 "name": "BaseBdev2", 00:13:18.539 "uuid": "08aa83e3-0454-5b50-8391-22d3f57d2d47", 00:13:18.539 "is_configured": true, 00:13:18.539 "data_offset": 0, 00:13:18.539 "data_size": 65536 00:13:18.539 }, 00:13:18.539 { 
00:13:18.539 "name": "BaseBdev3", 00:13:18.539 "uuid": "33571580-0cec-535a-9890-a925ede894a3", 00:13:18.539 "is_configured": true, 00:13:18.539 "data_offset": 0, 00:13:18.539 "data_size": 65536 00:13:18.539 }, 00:13:18.539 { 00:13:18.539 "name": "BaseBdev4", 00:13:18.539 "uuid": "ce53180b-a0cc-5826-94d1-b88032710076", 00:13:18.539 "is_configured": true, 00:13:18.539 "data_offset": 0, 00:13:18.539 "data_size": 65536 00:13:18.540 } 00:13:18.540 ] 00:13:18.540 }' 00:13:18.540 13:23:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.540 13:23:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.799 13:23:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:18.799 13:23:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:18.799 13:23:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:18.799 13:23:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:18.799 13:23:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:18.799 13:23:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.799 13:23:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.799 13:23:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.799 13:23:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.799 13:23:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.799 13:23:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:18.799 "name": "raid_bdev1", 00:13:18.799 "uuid": "d45fc88b-d13e-4424-98f3-34604c4c4582", 00:13:18.799 "strip_size_kb": 0, 00:13:18.799 "state": "online", 
00:13:18.799 "raid_level": "raid1", 00:13:18.799 "superblock": false, 00:13:18.799 "num_base_bdevs": 4, 00:13:18.799 "num_base_bdevs_discovered": 3, 00:13:18.799 "num_base_bdevs_operational": 3, 00:13:18.799 "base_bdevs_list": [ 00:13:18.799 { 00:13:18.799 "name": null, 00:13:18.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.799 "is_configured": false, 00:13:18.799 "data_offset": 0, 00:13:18.799 "data_size": 65536 00:13:18.799 }, 00:13:18.799 { 00:13:18.799 "name": "BaseBdev2", 00:13:18.799 "uuid": "08aa83e3-0454-5b50-8391-22d3f57d2d47", 00:13:18.799 "is_configured": true, 00:13:18.799 "data_offset": 0, 00:13:18.799 "data_size": 65536 00:13:18.799 }, 00:13:18.799 { 00:13:18.799 "name": "BaseBdev3", 00:13:18.799 "uuid": "33571580-0cec-535a-9890-a925ede894a3", 00:13:18.799 "is_configured": true, 00:13:18.799 "data_offset": 0, 00:13:18.799 "data_size": 65536 00:13:18.799 }, 00:13:18.799 { 00:13:18.799 "name": "BaseBdev4", 00:13:18.799 "uuid": "ce53180b-a0cc-5826-94d1-b88032710076", 00:13:18.799 "is_configured": true, 00:13:18.799 "data_offset": 0, 00:13:18.799 "data_size": 65536 00:13:18.799 } 00:13:18.799 ] 00:13:18.799 }' 00:13:18.799 13:23:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:18.799 13:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:18.799 13:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:19.059 13:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:19.059 13:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:19.059 13:23:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.059 13:23:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.059 [2024-11-17 13:23:08.059904] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:19.059 [2024-11-17 13:23:08.074570] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:13:19.059 13:23:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.059 13:23:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:19.059 [2024-11-17 13:23:08.076406] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:19.999 13:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:19.999 13:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:19.999 13:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:19.999 13:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:19.999 13:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:19.999 13:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.999 13:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.999 13:23:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.999 13:23:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.999 13:23:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.999 13:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:19.999 "name": "raid_bdev1", 00:13:19.999 "uuid": "d45fc88b-d13e-4424-98f3-34604c4c4582", 00:13:19.999 "strip_size_kb": 0, 00:13:19.999 "state": "online", 00:13:19.999 "raid_level": "raid1", 00:13:19.999 "superblock": false, 00:13:19.999 "num_base_bdevs": 4, 00:13:19.999 
"num_base_bdevs_discovered": 4, 00:13:19.999 "num_base_bdevs_operational": 4, 00:13:19.999 "process": { 00:13:19.999 "type": "rebuild", 00:13:19.999 "target": "spare", 00:13:19.999 "progress": { 00:13:19.999 "blocks": 20480, 00:13:19.999 "percent": 31 00:13:19.999 } 00:13:19.999 }, 00:13:19.999 "base_bdevs_list": [ 00:13:19.999 { 00:13:19.999 "name": "spare", 00:13:19.999 "uuid": "b26a3fe9-f5cc-56d6-9921-49e4a756a2f3", 00:13:19.999 "is_configured": true, 00:13:19.999 "data_offset": 0, 00:13:19.999 "data_size": 65536 00:13:19.999 }, 00:13:19.999 { 00:13:19.999 "name": "BaseBdev2", 00:13:19.999 "uuid": "08aa83e3-0454-5b50-8391-22d3f57d2d47", 00:13:19.999 "is_configured": true, 00:13:19.999 "data_offset": 0, 00:13:19.999 "data_size": 65536 00:13:19.999 }, 00:13:19.999 { 00:13:19.999 "name": "BaseBdev3", 00:13:19.999 "uuid": "33571580-0cec-535a-9890-a925ede894a3", 00:13:19.999 "is_configured": true, 00:13:19.999 "data_offset": 0, 00:13:19.999 "data_size": 65536 00:13:19.999 }, 00:13:19.999 { 00:13:19.999 "name": "BaseBdev4", 00:13:19.999 "uuid": "ce53180b-a0cc-5826-94d1-b88032710076", 00:13:19.999 "is_configured": true, 00:13:19.999 "data_offset": 0, 00:13:19.999 "data_size": 65536 00:13:19.999 } 00:13:19.999 ] 00:13:19.999 }' 00:13:19.999 13:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:19.999 13:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:19.999 13:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:19.999 13:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:19.999 13:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:19.999 13:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:19.999 13:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = 
raid1 ']' 00:13:19.999 13:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:19.999 13:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:19.999 13:23:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.999 13:23:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.259 [2024-11-17 13:23:09.223781] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:20.259 [2024-11-17 13:23:09.281135] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:13:20.259 13:23:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.259 13:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:20.259 13:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:20.259 13:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:20.259 13:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:20.259 13:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:20.259 13:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:20.259 13:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:20.259 13:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.259 13:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.259 13:23:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.259 13:23:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.259 13:23:09 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.259 13:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:20.259 "name": "raid_bdev1", 00:13:20.259 "uuid": "d45fc88b-d13e-4424-98f3-34604c4c4582", 00:13:20.259 "strip_size_kb": 0, 00:13:20.259 "state": "online", 00:13:20.259 "raid_level": "raid1", 00:13:20.259 "superblock": false, 00:13:20.259 "num_base_bdevs": 4, 00:13:20.259 "num_base_bdevs_discovered": 3, 00:13:20.259 "num_base_bdevs_operational": 3, 00:13:20.259 "process": { 00:13:20.259 "type": "rebuild", 00:13:20.259 "target": "spare", 00:13:20.259 "progress": { 00:13:20.259 "blocks": 24576, 00:13:20.259 "percent": 37 00:13:20.259 } 00:13:20.259 }, 00:13:20.259 "base_bdevs_list": [ 00:13:20.259 { 00:13:20.259 "name": "spare", 00:13:20.259 "uuid": "b26a3fe9-f5cc-56d6-9921-49e4a756a2f3", 00:13:20.259 "is_configured": true, 00:13:20.259 "data_offset": 0, 00:13:20.259 "data_size": 65536 00:13:20.259 }, 00:13:20.259 { 00:13:20.259 "name": null, 00:13:20.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.259 "is_configured": false, 00:13:20.259 "data_offset": 0, 00:13:20.259 "data_size": 65536 00:13:20.259 }, 00:13:20.259 { 00:13:20.259 "name": "BaseBdev3", 00:13:20.259 "uuid": "33571580-0cec-535a-9890-a925ede894a3", 00:13:20.259 "is_configured": true, 00:13:20.259 "data_offset": 0, 00:13:20.259 "data_size": 65536 00:13:20.259 }, 00:13:20.259 { 00:13:20.259 "name": "BaseBdev4", 00:13:20.259 "uuid": "ce53180b-a0cc-5826-94d1-b88032710076", 00:13:20.259 "is_configured": true, 00:13:20.259 "data_offset": 0, 00:13:20.259 "data_size": 65536 00:13:20.259 } 00:13:20.259 ] 00:13:20.259 }' 00:13:20.259 13:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:20.259 13:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:20.259 13:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:13:20.259 13:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:20.259 13:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=439 00:13:20.259 13:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:20.259 13:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:20.259 13:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:20.259 13:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:20.259 13:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:20.259 13:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:20.259 13:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.259 13:23:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.259 13:23:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.259 13:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.259 13:23:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.259 13:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:20.259 "name": "raid_bdev1", 00:13:20.259 "uuid": "d45fc88b-d13e-4424-98f3-34604c4c4582", 00:13:20.259 "strip_size_kb": 0, 00:13:20.259 "state": "online", 00:13:20.259 "raid_level": "raid1", 00:13:20.259 "superblock": false, 00:13:20.259 "num_base_bdevs": 4, 00:13:20.259 "num_base_bdevs_discovered": 3, 00:13:20.259 "num_base_bdevs_operational": 3, 00:13:20.259 "process": { 00:13:20.259 "type": "rebuild", 00:13:20.259 "target": "spare", 00:13:20.259 "progress": { 
00:13:20.259 "blocks": 26624, 00:13:20.259 "percent": 40 00:13:20.259 } 00:13:20.259 }, 00:13:20.259 "base_bdevs_list": [ 00:13:20.259 { 00:13:20.259 "name": "spare", 00:13:20.259 "uuid": "b26a3fe9-f5cc-56d6-9921-49e4a756a2f3", 00:13:20.259 "is_configured": true, 00:13:20.259 "data_offset": 0, 00:13:20.259 "data_size": 65536 00:13:20.259 }, 00:13:20.259 { 00:13:20.259 "name": null, 00:13:20.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.259 "is_configured": false, 00:13:20.259 "data_offset": 0, 00:13:20.259 "data_size": 65536 00:13:20.259 }, 00:13:20.259 { 00:13:20.259 "name": "BaseBdev3", 00:13:20.259 "uuid": "33571580-0cec-535a-9890-a925ede894a3", 00:13:20.259 "is_configured": true, 00:13:20.259 "data_offset": 0, 00:13:20.259 "data_size": 65536 00:13:20.259 }, 00:13:20.259 { 00:13:20.259 "name": "BaseBdev4", 00:13:20.259 "uuid": "ce53180b-a0cc-5826-94d1-b88032710076", 00:13:20.259 "is_configured": true, 00:13:20.259 "data_offset": 0, 00:13:20.259 "data_size": 65536 00:13:20.259 } 00:13:20.259 ] 00:13:20.259 }' 00:13:20.259 13:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:20.519 13:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:20.519 13:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:20.519 13:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:20.519 13:23:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:21.458 13:23:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:21.458 13:23:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:21.458 13:23:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:21.458 13:23:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:13:21.458 13:23:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:21.458 13:23:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:21.458 13:23:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.458 13:23:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.458 13:23:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.458 13:23:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.458 13:23:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.458 13:23:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:21.458 "name": "raid_bdev1", 00:13:21.458 "uuid": "d45fc88b-d13e-4424-98f3-34604c4c4582", 00:13:21.458 "strip_size_kb": 0, 00:13:21.458 "state": "online", 00:13:21.458 "raid_level": "raid1", 00:13:21.458 "superblock": false, 00:13:21.458 "num_base_bdevs": 4, 00:13:21.458 "num_base_bdevs_discovered": 3, 00:13:21.458 "num_base_bdevs_operational": 3, 00:13:21.458 "process": { 00:13:21.458 "type": "rebuild", 00:13:21.458 "target": "spare", 00:13:21.458 "progress": { 00:13:21.458 "blocks": 49152, 00:13:21.458 "percent": 75 00:13:21.458 } 00:13:21.458 }, 00:13:21.458 "base_bdevs_list": [ 00:13:21.458 { 00:13:21.458 "name": "spare", 00:13:21.458 "uuid": "b26a3fe9-f5cc-56d6-9921-49e4a756a2f3", 00:13:21.458 "is_configured": true, 00:13:21.458 "data_offset": 0, 00:13:21.458 "data_size": 65536 00:13:21.458 }, 00:13:21.458 { 00:13:21.458 "name": null, 00:13:21.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.458 "is_configured": false, 00:13:21.458 "data_offset": 0, 00:13:21.458 "data_size": 65536 00:13:21.458 }, 00:13:21.458 { 00:13:21.458 "name": "BaseBdev3", 00:13:21.458 "uuid": 
"33571580-0cec-535a-9890-a925ede894a3", 00:13:21.458 "is_configured": true, 00:13:21.458 "data_offset": 0, 00:13:21.458 "data_size": 65536 00:13:21.458 }, 00:13:21.458 { 00:13:21.458 "name": "BaseBdev4", 00:13:21.458 "uuid": "ce53180b-a0cc-5826-94d1-b88032710076", 00:13:21.458 "is_configured": true, 00:13:21.458 "data_offset": 0, 00:13:21.458 "data_size": 65536 00:13:21.458 } 00:13:21.458 ] 00:13:21.458 }' 00:13:21.458 13:23:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:21.458 13:23:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:21.458 13:23:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:21.458 13:23:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:21.458 13:23:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:22.396 [2024-11-17 13:23:11.289251] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:22.396 [2024-11-17 13:23:11.289335] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:22.396 [2024-11-17 13:23:11.289380] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:22.668 13:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:22.668 13:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:22.668 13:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:22.668 13:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:22.668 13:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:22.668 13:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:22.668 13:23:11 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.668 13:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.668 13:23:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.668 13:23:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.668 13:23:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.668 13:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:22.668 "name": "raid_bdev1", 00:13:22.668 "uuid": "d45fc88b-d13e-4424-98f3-34604c4c4582", 00:13:22.668 "strip_size_kb": 0, 00:13:22.668 "state": "online", 00:13:22.668 "raid_level": "raid1", 00:13:22.668 "superblock": false, 00:13:22.668 "num_base_bdevs": 4, 00:13:22.668 "num_base_bdevs_discovered": 3, 00:13:22.668 "num_base_bdevs_operational": 3, 00:13:22.668 "base_bdevs_list": [ 00:13:22.668 { 00:13:22.668 "name": "spare", 00:13:22.668 "uuid": "b26a3fe9-f5cc-56d6-9921-49e4a756a2f3", 00:13:22.668 "is_configured": true, 00:13:22.668 "data_offset": 0, 00:13:22.668 "data_size": 65536 00:13:22.668 }, 00:13:22.668 { 00:13:22.668 "name": null, 00:13:22.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.668 "is_configured": false, 00:13:22.668 "data_offset": 0, 00:13:22.668 "data_size": 65536 00:13:22.668 }, 00:13:22.668 { 00:13:22.668 "name": "BaseBdev3", 00:13:22.668 "uuid": "33571580-0cec-535a-9890-a925ede894a3", 00:13:22.668 "is_configured": true, 00:13:22.668 "data_offset": 0, 00:13:22.668 "data_size": 65536 00:13:22.668 }, 00:13:22.668 { 00:13:22.668 "name": "BaseBdev4", 00:13:22.668 "uuid": "ce53180b-a0cc-5826-94d1-b88032710076", 00:13:22.668 "is_configured": true, 00:13:22.668 "data_offset": 0, 00:13:22.668 "data_size": 65536 00:13:22.668 } 00:13:22.668 ] 00:13:22.668 }' 00:13:22.668 13:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:13:22.668 13:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:22.668 13:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:22.668 13:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:22.668 13:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:13:22.668 13:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:22.668 13:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:22.668 13:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:22.668 13:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:22.668 13:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:22.668 13:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.668 13:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.668 13:23:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.668 13:23:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.668 13:23:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.668 13:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:22.668 "name": "raid_bdev1", 00:13:22.668 "uuid": "d45fc88b-d13e-4424-98f3-34604c4c4582", 00:13:22.668 "strip_size_kb": 0, 00:13:22.668 "state": "online", 00:13:22.668 "raid_level": "raid1", 00:13:22.668 "superblock": false, 00:13:22.668 "num_base_bdevs": 4, 00:13:22.668 "num_base_bdevs_discovered": 3, 00:13:22.668 "num_base_bdevs_operational": 3, 00:13:22.669 
"base_bdevs_list": [ 00:13:22.669 { 00:13:22.669 "name": "spare", 00:13:22.669 "uuid": "b26a3fe9-f5cc-56d6-9921-49e4a756a2f3", 00:13:22.669 "is_configured": true, 00:13:22.669 "data_offset": 0, 00:13:22.669 "data_size": 65536 00:13:22.669 }, 00:13:22.669 { 00:13:22.669 "name": null, 00:13:22.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.669 "is_configured": false, 00:13:22.669 "data_offset": 0, 00:13:22.669 "data_size": 65536 00:13:22.669 }, 00:13:22.669 { 00:13:22.669 "name": "BaseBdev3", 00:13:22.669 "uuid": "33571580-0cec-535a-9890-a925ede894a3", 00:13:22.669 "is_configured": true, 00:13:22.669 "data_offset": 0, 00:13:22.669 "data_size": 65536 00:13:22.669 }, 00:13:22.669 { 00:13:22.669 "name": "BaseBdev4", 00:13:22.669 "uuid": "ce53180b-a0cc-5826-94d1-b88032710076", 00:13:22.669 "is_configured": true, 00:13:22.669 "data_offset": 0, 00:13:22.669 "data_size": 65536 00:13:22.669 } 00:13:22.669 ] 00:13:22.669 }' 00:13:22.669 13:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:22.942 13:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:22.942 13:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:22.942 13:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:22.942 13:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:22.942 13:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:22.942 13:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:22.942 13:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:22.942 13:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:22.942 13:23:11 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:22.942 13:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.942 13:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.942 13:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.942 13:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.942 13:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.942 13:23:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.942 13:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.942 13:23:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.942 13:23:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.942 13:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.942 "name": "raid_bdev1", 00:13:22.942 "uuid": "d45fc88b-d13e-4424-98f3-34604c4c4582", 00:13:22.942 "strip_size_kb": 0, 00:13:22.942 "state": "online", 00:13:22.942 "raid_level": "raid1", 00:13:22.942 "superblock": false, 00:13:22.942 "num_base_bdevs": 4, 00:13:22.942 "num_base_bdevs_discovered": 3, 00:13:22.942 "num_base_bdevs_operational": 3, 00:13:22.942 "base_bdevs_list": [ 00:13:22.942 { 00:13:22.942 "name": "spare", 00:13:22.942 "uuid": "b26a3fe9-f5cc-56d6-9921-49e4a756a2f3", 00:13:22.942 "is_configured": true, 00:13:22.942 "data_offset": 0, 00:13:22.942 "data_size": 65536 00:13:22.942 }, 00:13:22.942 { 00:13:22.942 "name": null, 00:13:22.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.942 "is_configured": false, 00:13:22.942 "data_offset": 0, 00:13:22.942 "data_size": 65536 00:13:22.942 }, 00:13:22.942 { 00:13:22.942 "name": "BaseBdev3", 00:13:22.942 "uuid": 
"33571580-0cec-535a-9890-a925ede894a3", 00:13:22.942 "is_configured": true, 00:13:22.942 "data_offset": 0, 00:13:22.942 "data_size": 65536 00:13:22.942 }, 00:13:22.942 { 00:13:22.942 "name": "BaseBdev4", 00:13:22.942 "uuid": "ce53180b-a0cc-5826-94d1-b88032710076", 00:13:22.942 "is_configured": true, 00:13:22.942 "data_offset": 0, 00:13:22.942 "data_size": 65536 00:13:22.942 } 00:13:22.942 ] 00:13:22.942 }' 00:13:22.942 13:23:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.942 13:23:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.202 13:23:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:23.202 13:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.202 13:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.202 [2024-11-17 13:23:12.332782] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:23.202 [2024-11-17 13:23:12.332864] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:23.202 [2024-11-17 13:23:12.332968] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:23.202 [2024-11-17 13:23:12.333051] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:23.202 [2024-11-17 13:23:12.333061] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:23.202 13:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.202 13:23:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:13:23.202 13:23:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.202 13:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:23.202 13:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.202 13:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.202 13:23:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:23.202 13:23:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:23.202 13:23:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:23.202 13:23:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:23.202 13:23:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:23.202 13:23:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:23.202 13:23:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:23.202 13:23:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:23.202 13:23:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:23.202 13:23:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:23.202 13:23:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:23.202 13:23:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:23.202 13:23:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:23.462 /dev/nbd0 00:13:23.462 13:23:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:23.462 13:23:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:23.462 13:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:23.462 13:23:12 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:23.463 13:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:23.463 13:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:23.463 13:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:23.463 13:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:23.463 13:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:23.463 13:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:23.463 13:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:23.463 1+0 records in 00:13:23.463 1+0 records out 00:13:23.463 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000391329 s, 10.5 MB/s 00:13:23.463 13:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.463 13:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:23.463 13:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.463 13:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:23.463 13:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:23.463 13:23:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:23.463 13:23:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:23.463 13:23:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:23.723 /dev/nbd1 00:13:23.723 
13:23:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:23.723 13:23:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:23.723 13:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:23.723 13:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:23.723 13:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:23.723 13:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:23.723 13:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:23.723 13:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:23.723 13:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:23.723 13:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:23.723 13:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:23.723 1+0 records in 00:13:23.723 1+0 records out 00:13:23.723 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000617886 s, 6.6 MB/s 00:13:23.723 13:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.723 13:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:23.723 13:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.723 13:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:23.723 13:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:23.723 13:23:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:13:23.723 13:23:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:23.723 13:23:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:23.983 13:23:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:23.983 13:23:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:23.983 13:23:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:23.983 13:23:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:23.983 13:23:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:23.983 13:23:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:23.983 13:23:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:24.242 13:23:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:24.242 13:23:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:24.242 13:23:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:24.243 13:23:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:24.243 13:23:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:24.243 13:23:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:24.243 13:23:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:24.243 13:23:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:24.243 13:23:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:24.243 13:23:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:24.503 13:23:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:24.503 13:23:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:24.503 13:23:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:24.503 13:23:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:24.503 13:23:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:24.503 13:23:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:24.503 13:23:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:24.504 13:23:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:24.504 13:23:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:24.504 13:23:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77439 00:13:24.504 13:23:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 77439 ']' 00:13:24.504 13:23:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 77439 00:13:24.504 13:23:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:13:24.504 13:23:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:24.504 13:23:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77439 00:13:24.504 killing process with pid 77439 00:13:24.504 Received shutdown signal, test time was about 60.000000 seconds 00:13:24.504 00:13:24.504 Latency(us) 00:13:24.504 [2024-11-17T13:23:13.728Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:24.504 [2024-11-17T13:23:13.728Z] 
=================================================================================================================== 00:13:24.504 [2024-11-17T13:23:13.728Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:24.504 13:23:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:24.504 13:23:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:24.504 13:23:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77439' 00:13:24.504 13:23:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 77439 00:13:24.504 [2024-11-17 13:23:13.561876] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:24.504 13:23:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 77439 00:13:25.072 [2024-11-17 13:23:14.047128] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:26.012 13:23:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:13:26.012 00:13:26.012 real 0m17.344s 00:13:26.012 user 0m18.934s 00:13:26.012 sys 0m3.222s 00:13:26.012 13:23:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:26.012 ************************************ 00:13:26.012 END TEST raid_rebuild_test 00:13:26.012 ************************************ 00:13:26.012 13:23:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.012 13:23:15 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:13:26.012 13:23:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:26.012 13:23:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:26.012 13:23:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:26.012 ************************************ 00:13:26.012 START TEST raid_rebuild_test_sb 00:13:26.012 
************************************ 00:13:26.012 13:23:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:13:26.012 13:23:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:26.012 13:23:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:26.012 13:23:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:26.012 13:23:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:26.012 13:23:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:26.012 13:23:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:26.012 13:23:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:26.012 13:23:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:26.012 13:23:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:26.012 13:23:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:26.012 13:23:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:26.012 13:23:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:26.012 13:23:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:26.012 13:23:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:26.012 13:23:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:26.012 13:23:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:26.012 13:23:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:26.012 13:23:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 
00:13:26.012 13:23:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:26.012 13:23:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:26.012 13:23:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:26.012 13:23:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:26.012 13:23:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:26.012 13:23:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:26.012 13:23:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:26.012 13:23:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:26.012 13:23:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:26.012 13:23:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:26.012 13:23:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:26.012 13:23:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:26.012 13:23:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=77887 00:13:26.013 13:23:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:26.013 13:23:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 77887 00:13:26.013 13:23:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 77887 ']' 00:13:26.013 13:23:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:26.013 13:23:15 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:13:26.013 13:23:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:26.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:26.013 13:23:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:26.013 13:23:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.272 [2024-11-17 13:23:15.287616] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:13:26.272 [2024-11-17 13:23:15.287825] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:13:26.272 Zero copy mechanism will not be used. 00:13:26.272 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77887 ] 00:13:26.272 [2024-11-17 13:23:15.457885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:26.531 [2024-11-17 13:23:15.570858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:26.791 [2024-11-17 13:23:15.758099] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:26.791 [2024-11-17 13:23:15.758141] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:27.052 13:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:27.052 13:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:27.052 13:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:27.052 13:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 
00:13:27.052 13:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.052 13:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.052 BaseBdev1_malloc 00:13:27.052 13:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.052 13:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:27.052 13:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.052 13:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.052 [2024-11-17 13:23:16.152647] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:27.052 [2024-11-17 13:23:16.152727] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:27.052 [2024-11-17 13:23:16.152751] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:27.052 [2024-11-17 13:23:16.152762] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:27.052 [2024-11-17 13:23:16.154764] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:27.052 [2024-11-17 13:23:16.154805] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:27.052 BaseBdev1 00:13:27.052 13:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.052 13:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:27.052 13:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:27.052 13:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.052 13:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:13:27.052 BaseBdev2_malloc 00:13:27.052 13:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.052 13:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:27.052 13:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.052 13:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.052 [2024-11-17 13:23:16.207436] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:27.052 [2024-11-17 13:23:16.207533] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:27.052 [2024-11-17 13:23:16.207556] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:27.052 [2024-11-17 13:23:16.207568] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:27.052 [2024-11-17 13:23:16.209643] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:27.052 [2024-11-17 13:23:16.209683] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:27.052 BaseBdev2 00:13:27.052 13:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.052 13:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:27.052 13:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:27.052 13:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.052 13:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.052 BaseBdev3_malloc 00:13:27.052 13:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.052 13:23:16 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:27.052 13:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.052 13:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.052 [2024-11-17 13:23:16.269705] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:27.052 [2024-11-17 13:23:16.269756] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:27.052 [2024-11-17 13:23:16.269793] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:27.052 [2024-11-17 13:23:16.269803] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:27.052 [2024-11-17 13:23:16.271865] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:27.052 [2024-11-17 13:23:16.271959] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:27.312 BaseBdev3 00:13:27.312 13:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.312 13:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:27.312 13:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:27.312 13:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.312 13:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.312 BaseBdev4_malloc 00:13:27.312 13:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.312 13:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:27.312 13:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:27.312 13:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.312 [2024-11-17 13:23:16.325092] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:27.312 [2024-11-17 13:23:16.325144] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:27.312 [2024-11-17 13:23:16.325161] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:27.312 [2024-11-17 13:23:16.325171] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:27.312 [2024-11-17 13:23:16.327271] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:27.312 [2024-11-17 13:23:16.327344] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:27.312 BaseBdev4 00:13:27.312 13:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.312 13:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:27.312 13:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.312 13:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.312 spare_malloc 00:13:27.312 13:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.312 13:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:27.312 13:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.312 13:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.312 spare_delay 00:13:27.312 13:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.312 13:23:16 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:27.312 13:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.312 13:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.312 [2024-11-17 13:23:16.392807] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:27.312 [2024-11-17 13:23:16.392861] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:27.312 [2024-11-17 13:23:16.392895] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:27.312 [2024-11-17 13:23:16.392906] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:27.312 [2024-11-17 13:23:16.394912] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:27.312 [2024-11-17 13:23:16.395005] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:27.312 spare 00:13:27.312 13:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.312 13:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:27.312 13:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.312 13:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.312 [2024-11-17 13:23:16.404840] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:27.312 [2024-11-17 13:23:16.406600] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:27.312 [2024-11-17 13:23:16.406666] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:27.312 [2024-11-17 13:23:16.406718] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:13:27.313 [2024-11-17 13:23:16.406897] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:27.313 [2024-11-17 13:23:16.406914] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:27.313 [2024-11-17 13:23:16.407142] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:27.313 [2024-11-17 13:23:16.407386] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:27.313 [2024-11-17 13:23:16.407419] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:27.313 [2024-11-17 13:23:16.407653] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:27.313 13:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.313 13:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:27.313 13:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:27.313 13:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:27.313 13:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:27.313 13:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:27.313 13:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:27.313 13:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.313 13:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.313 13:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.313 13:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:13:27.313 13:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.313 13:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.313 13:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.313 13:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.313 13:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.313 13:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.313 "name": "raid_bdev1", 00:13:27.313 "uuid": "70459e7f-0563-4896-b84d-2982f338816f", 00:13:27.313 "strip_size_kb": 0, 00:13:27.313 "state": "online", 00:13:27.313 "raid_level": "raid1", 00:13:27.313 "superblock": true, 00:13:27.313 "num_base_bdevs": 4, 00:13:27.313 "num_base_bdevs_discovered": 4, 00:13:27.313 "num_base_bdevs_operational": 4, 00:13:27.313 "base_bdevs_list": [ 00:13:27.313 { 00:13:27.313 "name": "BaseBdev1", 00:13:27.313 "uuid": "cca70b00-9f57-591e-8db7-a1aabb240787", 00:13:27.313 "is_configured": true, 00:13:27.313 "data_offset": 2048, 00:13:27.313 "data_size": 63488 00:13:27.313 }, 00:13:27.313 { 00:13:27.313 "name": "BaseBdev2", 00:13:27.313 "uuid": "a7d64397-e9f1-5fd6-a89b-556bebae190d", 00:13:27.313 "is_configured": true, 00:13:27.313 "data_offset": 2048, 00:13:27.313 "data_size": 63488 00:13:27.313 }, 00:13:27.313 { 00:13:27.313 "name": "BaseBdev3", 00:13:27.313 "uuid": "8b9c3a12-c695-5848-81b3-b37e080671d7", 00:13:27.313 "is_configured": true, 00:13:27.313 "data_offset": 2048, 00:13:27.313 "data_size": 63488 00:13:27.313 }, 00:13:27.313 { 00:13:27.313 "name": "BaseBdev4", 00:13:27.313 "uuid": "fc6589e2-26ba-5560-9638-ea6ad185db73", 00:13:27.313 "is_configured": true, 00:13:27.313 "data_offset": 2048, 00:13:27.313 "data_size": 63488 00:13:27.313 } 00:13:27.313 ] 00:13:27.313 }' 
00:13:27.313 13:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.313 13:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.882 13:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:27.882 13:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.882 13:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.882 13:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:27.882 [2024-11-17 13:23:16.832483] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:27.882 13:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.882 13:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:27.882 13:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:27.882 13:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.882 13:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.882 13:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.882 13:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.882 13:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:27.882 13:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:27.882 13:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:27.882 13:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:27.882 13:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # 
nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:27.882 13:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:27.882 13:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:27.882 13:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:27.882 13:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:27.882 13:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:27.882 13:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:27.882 13:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:27.882 13:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:27.882 13:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:28.142 [2024-11-17 13:23:17.111676] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:28.142 /dev/nbd0 00:13:28.142 13:23:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:28.142 13:23:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:28.142 13:23:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:28.142 13:23:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:28.142 13:23:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:28.142 13:23:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:28.142 13:23:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:28.142 13:23:17 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@877 -- # break 00:13:28.142 13:23:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:28.142 13:23:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:28.142 13:23:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:28.142 1+0 records in 00:13:28.142 1+0 records out 00:13:28.142 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000232971 s, 17.6 MB/s 00:13:28.142 13:23:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:28.142 13:23:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:28.142 13:23:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:28.142 13:23:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:28.142 13:23:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:28.142 13:23:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:28.142 13:23:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:28.142 13:23:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:28.142 13:23:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:28.142 13:23:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:13:33.443 63488+0 records in 00:13:33.443 63488+0 records out 00:13:33.443 32505856 bytes (33 MB, 31 MiB) copied, 5.33405 s, 6.1 MB/s 00:13:33.443 13:23:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:33.443 13:23:22 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:33.443 13:23:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:33.443 13:23:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:33.443 13:23:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:33.443 13:23:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:33.443 13:23:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:33.702 13:23:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:33.702 [2024-11-17 13:23:22.735244] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:33.702 13:23:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:33.702 13:23:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:33.702 13:23:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:33.702 13:23:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:33.702 13:23:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:33.702 13:23:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:33.702 13:23:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:33.702 13:23:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:33.702 13:23:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.702 13:23:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.702 [2024-11-17 13:23:22.751719] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:33.702 13:23:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.702 13:23:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:33.702 13:23:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:33.702 13:23:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:33.702 13:23:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:33.702 13:23:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:33.702 13:23:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:33.702 13:23:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:33.702 13:23:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.702 13:23:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.702 13:23:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.702 13:23:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.702 13:23:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.702 13:23:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.702 13:23:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.702 13:23:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.702 13:23:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.702 "name": "raid_bdev1", 00:13:33.702 "uuid": 
"70459e7f-0563-4896-b84d-2982f338816f", 00:13:33.702 "strip_size_kb": 0, 00:13:33.702 "state": "online", 00:13:33.702 "raid_level": "raid1", 00:13:33.702 "superblock": true, 00:13:33.702 "num_base_bdevs": 4, 00:13:33.702 "num_base_bdevs_discovered": 3, 00:13:33.702 "num_base_bdevs_operational": 3, 00:13:33.702 "base_bdevs_list": [ 00:13:33.702 { 00:13:33.702 "name": null, 00:13:33.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.702 "is_configured": false, 00:13:33.702 "data_offset": 0, 00:13:33.702 "data_size": 63488 00:13:33.702 }, 00:13:33.702 { 00:13:33.702 "name": "BaseBdev2", 00:13:33.702 "uuid": "a7d64397-e9f1-5fd6-a89b-556bebae190d", 00:13:33.702 "is_configured": true, 00:13:33.702 "data_offset": 2048, 00:13:33.702 "data_size": 63488 00:13:33.702 }, 00:13:33.702 { 00:13:33.702 "name": "BaseBdev3", 00:13:33.702 "uuid": "8b9c3a12-c695-5848-81b3-b37e080671d7", 00:13:33.702 "is_configured": true, 00:13:33.702 "data_offset": 2048, 00:13:33.702 "data_size": 63488 00:13:33.702 }, 00:13:33.702 { 00:13:33.702 "name": "BaseBdev4", 00:13:33.702 "uuid": "fc6589e2-26ba-5560-9638-ea6ad185db73", 00:13:33.702 "is_configured": true, 00:13:33.702 "data_offset": 2048, 00:13:33.702 "data_size": 63488 00:13:33.702 } 00:13:33.702 ] 00:13:33.702 }' 00:13:33.702 13:23:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.702 13:23:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.961 13:23:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:33.961 13:23:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.961 13:23:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.220 [2024-11-17 13:23:23.186981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:34.220 [2024-11-17 13:23:23.202250] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:13:34.220 13:23:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.220 [2024-11-17 13:23:23.204205] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:34.220 13:23:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:35.159 13:23:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:35.159 13:23:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:35.159 13:23:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:35.159 13:23:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:35.159 13:23:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:35.159 13:23:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.159 13:23:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.159 13:23:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.159 13:23:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.159 13:23:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.159 13:23:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:35.159 "name": "raid_bdev1", 00:13:35.159 "uuid": "70459e7f-0563-4896-b84d-2982f338816f", 00:13:35.159 "strip_size_kb": 0, 00:13:35.159 "state": "online", 00:13:35.159 "raid_level": "raid1", 00:13:35.159 "superblock": true, 00:13:35.159 "num_base_bdevs": 4, 00:13:35.159 "num_base_bdevs_discovered": 4, 00:13:35.159 "num_base_bdevs_operational": 4, 00:13:35.159 "process": { 00:13:35.159 "type": 
"rebuild", 00:13:35.159 "target": "spare", 00:13:35.159 "progress": { 00:13:35.159 "blocks": 20480, 00:13:35.159 "percent": 32 00:13:35.159 } 00:13:35.159 }, 00:13:35.159 "base_bdevs_list": [ 00:13:35.159 { 00:13:35.159 "name": "spare", 00:13:35.159 "uuid": "23d05da3-9781-534a-beba-b6d9889984b0", 00:13:35.159 "is_configured": true, 00:13:35.159 "data_offset": 2048, 00:13:35.159 "data_size": 63488 00:13:35.159 }, 00:13:35.159 { 00:13:35.159 "name": "BaseBdev2", 00:13:35.159 "uuid": "a7d64397-e9f1-5fd6-a89b-556bebae190d", 00:13:35.159 "is_configured": true, 00:13:35.159 "data_offset": 2048, 00:13:35.159 "data_size": 63488 00:13:35.159 }, 00:13:35.159 { 00:13:35.159 "name": "BaseBdev3", 00:13:35.159 "uuid": "8b9c3a12-c695-5848-81b3-b37e080671d7", 00:13:35.159 "is_configured": true, 00:13:35.159 "data_offset": 2048, 00:13:35.159 "data_size": 63488 00:13:35.159 }, 00:13:35.159 { 00:13:35.159 "name": "BaseBdev4", 00:13:35.159 "uuid": "fc6589e2-26ba-5560-9638-ea6ad185db73", 00:13:35.159 "is_configured": true, 00:13:35.159 "data_offset": 2048, 00:13:35.159 "data_size": 63488 00:13:35.159 } 00:13:35.159 ] 00:13:35.159 }' 00:13:35.159 13:23:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:35.159 13:23:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:35.159 13:23:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:35.159 13:23:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:35.159 13:23:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:35.159 13:23:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.159 13:23:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.159 [2024-11-17 13:23:24.339532] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:35.418 [2024-11-17 13:23:24.409902] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:35.418 [2024-11-17 13:23:24.410029] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:35.418 [2024-11-17 13:23:24.410048] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:35.418 [2024-11-17 13:23:24.410061] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:35.418 13:23:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.418 13:23:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:35.418 13:23:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:35.418 13:23:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:35.418 13:23:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:35.418 13:23:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:35.418 13:23:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:35.418 13:23:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.418 13:23:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.418 13:23:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.418 13:23:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.418 13:23:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.418 13:23:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:13:35.418 13:23:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.418 13:23:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.418 13:23:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.418 13:23:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.418 "name": "raid_bdev1", 00:13:35.418 "uuid": "70459e7f-0563-4896-b84d-2982f338816f", 00:13:35.418 "strip_size_kb": 0, 00:13:35.418 "state": "online", 00:13:35.418 "raid_level": "raid1", 00:13:35.418 "superblock": true, 00:13:35.418 "num_base_bdevs": 4, 00:13:35.418 "num_base_bdevs_discovered": 3, 00:13:35.418 "num_base_bdevs_operational": 3, 00:13:35.418 "base_bdevs_list": [ 00:13:35.418 { 00:13:35.418 "name": null, 00:13:35.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.418 "is_configured": false, 00:13:35.418 "data_offset": 0, 00:13:35.418 "data_size": 63488 00:13:35.418 }, 00:13:35.418 { 00:13:35.418 "name": "BaseBdev2", 00:13:35.418 "uuid": "a7d64397-e9f1-5fd6-a89b-556bebae190d", 00:13:35.418 "is_configured": true, 00:13:35.418 "data_offset": 2048, 00:13:35.418 "data_size": 63488 00:13:35.418 }, 00:13:35.418 { 00:13:35.418 "name": "BaseBdev3", 00:13:35.418 "uuid": "8b9c3a12-c695-5848-81b3-b37e080671d7", 00:13:35.418 "is_configured": true, 00:13:35.418 "data_offset": 2048, 00:13:35.418 "data_size": 63488 00:13:35.418 }, 00:13:35.418 { 00:13:35.418 "name": "BaseBdev4", 00:13:35.418 "uuid": "fc6589e2-26ba-5560-9638-ea6ad185db73", 00:13:35.418 "is_configured": true, 00:13:35.418 "data_offset": 2048, 00:13:35.418 "data_size": 63488 00:13:35.418 } 00:13:35.418 ] 00:13:35.418 }' 00:13:35.418 13:23:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.419 13:23:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.677 13:23:24 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:35.677 13:23:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:35.677 13:23:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:35.677 13:23:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:35.677 13:23:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:35.677 13:23:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.677 13:23:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.677 13:23:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.677 13:23:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.677 13:23:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.937 13:23:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:35.937 "name": "raid_bdev1", 00:13:35.937 "uuid": "70459e7f-0563-4896-b84d-2982f338816f", 00:13:35.937 "strip_size_kb": 0, 00:13:35.937 "state": "online", 00:13:35.937 "raid_level": "raid1", 00:13:35.937 "superblock": true, 00:13:35.937 "num_base_bdevs": 4, 00:13:35.937 "num_base_bdevs_discovered": 3, 00:13:35.937 "num_base_bdevs_operational": 3, 00:13:35.937 "base_bdevs_list": [ 00:13:35.937 { 00:13:35.937 "name": null, 00:13:35.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.937 "is_configured": false, 00:13:35.937 "data_offset": 0, 00:13:35.937 "data_size": 63488 00:13:35.937 }, 00:13:35.937 { 00:13:35.937 "name": "BaseBdev2", 00:13:35.937 "uuid": "a7d64397-e9f1-5fd6-a89b-556bebae190d", 00:13:35.937 "is_configured": true, 00:13:35.937 "data_offset": 2048, 00:13:35.937 "data_size": 
63488 00:13:35.937 }, 00:13:35.937 { 00:13:35.937 "name": "BaseBdev3", 00:13:35.937 "uuid": "8b9c3a12-c695-5848-81b3-b37e080671d7", 00:13:35.937 "is_configured": true, 00:13:35.937 "data_offset": 2048, 00:13:35.937 "data_size": 63488 00:13:35.937 }, 00:13:35.937 { 00:13:35.937 "name": "BaseBdev4", 00:13:35.937 "uuid": "fc6589e2-26ba-5560-9638-ea6ad185db73", 00:13:35.937 "is_configured": true, 00:13:35.937 "data_offset": 2048, 00:13:35.937 "data_size": 63488 00:13:35.937 } 00:13:35.937 ] 00:13:35.937 }' 00:13:35.937 13:23:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:35.937 13:23:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:35.937 13:23:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:35.937 13:23:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:35.937 13:23:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:35.937 13:23:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.937 13:23:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.937 [2024-11-17 13:23:24.991836] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:35.937 [2024-11-17 13:23:25.005449] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:13:35.937 13:23:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.937 13:23:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:35.937 [2024-11-17 13:23:25.007300] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:36.876 13:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild 
spare 00:13:36.876 13:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:36.876 13:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:36.876 13:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:36.876 13:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:36.876 13:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.877 13:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.877 13:23:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.877 13:23:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.877 13:23:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.877 13:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:36.877 "name": "raid_bdev1", 00:13:36.877 "uuid": "70459e7f-0563-4896-b84d-2982f338816f", 00:13:36.877 "strip_size_kb": 0, 00:13:36.877 "state": "online", 00:13:36.877 "raid_level": "raid1", 00:13:36.877 "superblock": true, 00:13:36.877 "num_base_bdevs": 4, 00:13:36.877 "num_base_bdevs_discovered": 4, 00:13:36.877 "num_base_bdevs_operational": 4, 00:13:36.877 "process": { 00:13:36.877 "type": "rebuild", 00:13:36.877 "target": "spare", 00:13:36.877 "progress": { 00:13:36.877 "blocks": 20480, 00:13:36.877 "percent": 32 00:13:36.877 } 00:13:36.877 }, 00:13:36.877 "base_bdevs_list": [ 00:13:36.877 { 00:13:36.877 "name": "spare", 00:13:36.877 "uuid": "23d05da3-9781-534a-beba-b6d9889984b0", 00:13:36.877 "is_configured": true, 00:13:36.877 "data_offset": 2048, 00:13:36.877 "data_size": 63488 00:13:36.877 }, 00:13:36.877 { 00:13:36.877 "name": "BaseBdev2", 00:13:36.877 "uuid": 
"a7d64397-e9f1-5fd6-a89b-556bebae190d", 00:13:36.877 "is_configured": true, 00:13:36.877 "data_offset": 2048, 00:13:36.877 "data_size": 63488 00:13:36.877 }, 00:13:36.877 { 00:13:36.877 "name": "BaseBdev3", 00:13:36.877 "uuid": "8b9c3a12-c695-5848-81b3-b37e080671d7", 00:13:36.877 "is_configured": true, 00:13:36.877 "data_offset": 2048, 00:13:36.877 "data_size": 63488 00:13:36.877 }, 00:13:36.877 { 00:13:36.877 "name": "BaseBdev4", 00:13:36.877 "uuid": "fc6589e2-26ba-5560-9638-ea6ad185db73", 00:13:36.877 "is_configured": true, 00:13:36.877 "data_offset": 2048, 00:13:36.877 "data_size": 63488 00:13:36.877 } 00:13:36.877 ] 00:13:36.877 }' 00:13:36.877 13:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:37.137 13:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:37.137 13:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:37.137 13:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:37.137 13:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:37.137 13:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:37.137 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:37.137 13:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:37.137 13:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:37.137 13:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:37.137 13:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:37.137 13:23:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.137 13:23:26 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.137 [2024-11-17 13:23:26.171529] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:37.137 [2024-11-17 13:23:26.312471] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:13:37.137 13:23:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.137 13:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:37.137 13:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:37.137 13:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:37.137 13:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:37.137 13:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:37.137 13:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:37.137 13:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:37.137 13:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.137 13:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.137 13:23:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.137 13:23:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.137 13:23:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.396 13:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:37.396 "name": "raid_bdev1", 00:13:37.396 "uuid": "70459e7f-0563-4896-b84d-2982f338816f", 00:13:37.396 "strip_size_kb": 0, 00:13:37.396 
"state": "online", 00:13:37.396 "raid_level": "raid1", 00:13:37.396 "superblock": true, 00:13:37.397 "num_base_bdevs": 4, 00:13:37.397 "num_base_bdevs_discovered": 3, 00:13:37.397 "num_base_bdevs_operational": 3, 00:13:37.397 "process": { 00:13:37.397 "type": "rebuild", 00:13:37.397 "target": "spare", 00:13:37.397 "progress": { 00:13:37.397 "blocks": 24576, 00:13:37.397 "percent": 38 00:13:37.397 } 00:13:37.397 }, 00:13:37.397 "base_bdevs_list": [ 00:13:37.397 { 00:13:37.397 "name": "spare", 00:13:37.397 "uuid": "23d05da3-9781-534a-beba-b6d9889984b0", 00:13:37.397 "is_configured": true, 00:13:37.397 "data_offset": 2048, 00:13:37.397 "data_size": 63488 00:13:37.397 }, 00:13:37.397 { 00:13:37.397 "name": null, 00:13:37.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.397 "is_configured": false, 00:13:37.397 "data_offset": 0, 00:13:37.397 "data_size": 63488 00:13:37.397 }, 00:13:37.397 { 00:13:37.397 "name": "BaseBdev3", 00:13:37.397 "uuid": "8b9c3a12-c695-5848-81b3-b37e080671d7", 00:13:37.397 "is_configured": true, 00:13:37.397 "data_offset": 2048, 00:13:37.397 "data_size": 63488 00:13:37.397 }, 00:13:37.397 { 00:13:37.397 "name": "BaseBdev4", 00:13:37.397 "uuid": "fc6589e2-26ba-5560-9638-ea6ad185db73", 00:13:37.397 "is_configured": true, 00:13:37.397 "data_offset": 2048, 00:13:37.397 "data_size": 63488 00:13:37.397 } 00:13:37.397 ] 00:13:37.397 }' 00:13:37.397 13:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:37.397 13:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:37.397 13:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:37.397 13:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:37.397 13:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=456 00:13:37.397 13:23:26 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:37.397 13:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:37.397 13:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:37.397 13:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:37.397 13:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:37.397 13:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:37.397 13:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.397 13:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.397 13:23:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.397 13:23:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.397 13:23:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.397 13:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:37.397 "name": "raid_bdev1", 00:13:37.397 "uuid": "70459e7f-0563-4896-b84d-2982f338816f", 00:13:37.397 "strip_size_kb": 0, 00:13:37.397 "state": "online", 00:13:37.397 "raid_level": "raid1", 00:13:37.397 "superblock": true, 00:13:37.397 "num_base_bdevs": 4, 00:13:37.397 "num_base_bdevs_discovered": 3, 00:13:37.397 "num_base_bdevs_operational": 3, 00:13:37.397 "process": { 00:13:37.397 "type": "rebuild", 00:13:37.397 "target": "spare", 00:13:37.397 "progress": { 00:13:37.397 "blocks": 26624, 00:13:37.397 "percent": 41 00:13:37.397 } 00:13:37.397 }, 00:13:37.397 "base_bdevs_list": [ 00:13:37.397 { 00:13:37.397 "name": "spare", 00:13:37.397 "uuid": "23d05da3-9781-534a-beba-b6d9889984b0", 00:13:37.397 "is_configured": 
true, 00:13:37.397 "data_offset": 2048, 00:13:37.397 "data_size": 63488 00:13:37.397 }, 00:13:37.397 { 00:13:37.397 "name": null, 00:13:37.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.397 "is_configured": false, 00:13:37.397 "data_offset": 0, 00:13:37.397 "data_size": 63488 00:13:37.397 }, 00:13:37.397 { 00:13:37.397 "name": "BaseBdev3", 00:13:37.397 "uuid": "8b9c3a12-c695-5848-81b3-b37e080671d7", 00:13:37.397 "is_configured": true, 00:13:37.397 "data_offset": 2048, 00:13:37.397 "data_size": 63488 00:13:37.397 }, 00:13:37.397 { 00:13:37.397 "name": "BaseBdev4", 00:13:37.397 "uuid": "fc6589e2-26ba-5560-9638-ea6ad185db73", 00:13:37.397 "is_configured": true, 00:13:37.397 "data_offset": 2048, 00:13:37.397 "data_size": 63488 00:13:37.397 } 00:13:37.397 ] 00:13:37.397 }' 00:13:37.397 13:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:37.397 13:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:37.397 13:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:37.397 13:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:37.397 13:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:38.840 13:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:38.840 13:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:38.840 13:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:38.840 13:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:38.840 13:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:38.840 13:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:13:38.840 13:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.840 13:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.840 13:23:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.840 13:23:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.840 13:23:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.840 13:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:38.840 "name": "raid_bdev1", 00:13:38.840 "uuid": "70459e7f-0563-4896-b84d-2982f338816f", 00:13:38.840 "strip_size_kb": 0, 00:13:38.840 "state": "online", 00:13:38.840 "raid_level": "raid1", 00:13:38.840 "superblock": true, 00:13:38.840 "num_base_bdevs": 4, 00:13:38.840 "num_base_bdevs_discovered": 3, 00:13:38.840 "num_base_bdevs_operational": 3, 00:13:38.840 "process": { 00:13:38.840 "type": "rebuild", 00:13:38.840 "target": "spare", 00:13:38.840 "progress": { 00:13:38.840 "blocks": 49152, 00:13:38.840 "percent": 77 00:13:38.840 } 00:13:38.840 }, 00:13:38.840 "base_bdevs_list": [ 00:13:38.840 { 00:13:38.840 "name": "spare", 00:13:38.840 "uuid": "23d05da3-9781-534a-beba-b6d9889984b0", 00:13:38.840 "is_configured": true, 00:13:38.840 "data_offset": 2048, 00:13:38.840 "data_size": 63488 00:13:38.840 }, 00:13:38.840 { 00:13:38.840 "name": null, 00:13:38.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.840 "is_configured": false, 00:13:38.840 "data_offset": 0, 00:13:38.840 "data_size": 63488 00:13:38.840 }, 00:13:38.840 { 00:13:38.840 "name": "BaseBdev3", 00:13:38.840 "uuid": "8b9c3a12-c695-5848-81b3-b37e080671d7", 00:13:38.840 "is_configured": true, 00:13:38.840 "data_offset": 2048, 00:13:38.840 "data_size": 63488 00:13:38.840 }, 00:13:38.840 { 00:13:38.840 "name": "BaseBdev4", 00:13:38.840 "uuid": 
"fc6589e2-26ba-5560-9638-ea6ad185db73", 00:13:38.840 "is_configured": true, 00:13:38.840 "data_offset": 2048, 00:13:38.840 "data_size": 63488 00:13:38.840 } 00:13:38.840 ] 00:13:38.840 }' 00:13:38.840 13:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:38.840 13:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:38.840 13:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:38.840 13:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:38.840 13:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:39.100 [2024-11-17 13:23:28.220696] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:39.100 [2024-11-17 13:23:28.220888] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:39.100 [2024-11-17 13:23:28.221061] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:39.670 13:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:39.670 13:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:39.670 13:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:39.670 13:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:39.670 13:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:39.670 13:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:39.670 13:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.670 13:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:13:39.670 13:23:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.670 13:23:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.670 13:23:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.670 13:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:39.670 "name": "raid_bdev1", 00:13:39.670 "uuid": "70459e7f-0563-4896-b84d-2982f338816f", 00:13:39.670 "strip_size_kb": 0, 00:13:39.670 "state": "online", 00:13:39.670 "raid_level": "raid1", 00:13:39.670 "superblock": true, 00:13:39.670 "num_base_bdevs": 4, 00:13:39.670 "num_base_bdevs_discovered": 3, 00:13:39.670 "num_base_bdevs_operational": 3, 00:13:39.670 "base_bdevs_list": [ 00:13:39.670 { 00:13:39.670 "name": "spare", 00:13:39.670 "uuid": "23d05da3-9781-534a-beba-b6d9889984b0", 00:13:39.670 "is_configured": true, 00:13:39.670 "data_offset": 2048, 00:13:39.670 "data_size": 63488 00:13:39.670 }, 00:13:39.670 { 00:13:39.670 "name": null, 00:13:39.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.670 "is_configured": false, 00:13:39.670 "data_offset": 0, 00:13:39.670 "data_size": 63488 00:13:39.670 }, 00:13:39.670 { 00:13:39.670 "name": "BaseBdev3", 00:13:39.670 "uuid": "8b9c3a12-c695-5848-81b3-b37e080671d7", 00:13:39.670 "is_configured": true, 00:13:39.670 "data_offset": 2048, 00:13:39.670 "data_size": 63488 00:13:39.670 }, 00:13:39.670 { 00:13:39.670 "name": "BaseBdev4", 00:13:39.670 "uuid": "fc6589e2-26ba-5560-9638-ea6ad185db73", 00:13:39.670 "is_configured": true, 00:13:39.670 "data_offset": 2048, 00:13:39.670 "data_size": 63488 00:13:39.670 } 00:13:39.670 ] 00:13:39.670 }' 00:13:39.670 13:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:39.670 13:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:39.670 
13:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:39.670 13:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:39.670 13:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:13:39.670 13:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:39.670 13:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:39.670 13:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:39.670 13:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:39.670 13:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:39.670 13:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.670 13:23:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.670 13:23:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.670 13:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.670 13:23:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.670 13:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:39.670 "name": "raid_bdev1", 00:13:39.670 "uuid": "70459e7f-0563-4896-b84d-2982f338816f", 00:13:39.670 "strip_size_kb": 0, 00:13:39.670 "state": "online", 00:13:39.670 "raid_level": "raid1", 00:13:39.670 "superblock": true, 00:13:39.670 "num_base_bdevs": 4, 00:13:39.670 "num_base_bdevs_discovered": 3, 00:13:39.670 "num_base_bdevs_operational": 3, 00:13:39.670 "base_bdevs_list": [ 00:13:39.670 { 00:13:39.670 "name": "spare", 00:13:39.670 "uuid": 
"23d05da3-9781-534a-beba-b6d9889984b0", 00:13:39.670 "is_configured": true, 00:13:39.670 "data_offset": 2048, 00:13:39.670 "data_size": 63488 00:13:39.670 }, 00:13:39.670 { 00:13:39.670 "name": null, 00:13:39.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.670 "is_configured": false, 00:13:39.670 "data_offset": 0, 00:13:39.670 "data_size": 63488 00:13:39.670 }, 00:13:39.670 { 00:13:39.670 "name": "BaseBdev3", 00:13:39.670 "uuid": "8b9c3a12-c695-5848-81b3-b37e080671d7", 00:13:39.670 "is_configured": true, 00:13:39.670 "data_offset": 2048, 00:13:39.670 "data_size": 63488 00:13:39.670 }, 00:13:39.670 { 00:13:39.670 "name": "BaseBdev4", 00:13:39.670 "uuid": "fc6589e2-26ba-5560-9638-ea6ad185db73", 00:13:39.670 "is_configured": true, 00:13:39.670 "data_offset": 2048, 00:13:39.670 "data_size": 63488 00:13:39.670 } 00:13:39.670 ] 00:13:39.670 }' 00:13:39.670 13:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:39.930 13:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:39.930 13:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:39.930 13:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:39.930 13:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:39.930 13:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:39.930 13:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:39.930 13:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:39.930 13:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:39.930 13:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:13:39.930 13:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.930 13:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.930 13:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.930 13:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.930 13:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.930 13:23:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.930 13:23:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.930 13:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.930 13:23:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.930 13:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.930 "name": "raid_bdev1", 00:13:39.930 "uuid": "70459e7f-0563-4896-b84d-2982f338816f", 00:13:39.930 "strip_size_kb": 0, 00:13:39.930 "state": "online", 00:13:39.930 "raid_level": "raid1", 00:13:39.930 "superblock": true, 00:13:39.930 "num_base_bdevs": 4, 00:13:39.930 "num_base_bdevs_discovered": 3, 00:13:39.930 "num_base_bdevs_operational": 3, 00:13:39.930 "base_bdevs_list": [ 00:13:39.930 { 00:13:39.930 "name": "spare", 00:13:39.930 "uuid": "23d05da3-9781-534a-beba-b6d9889984b0", 00:13:39.930 "is_configured": true, 00:13:39.930 "data_offset": 2048, 00:13:39.930 "data_size": 63488 00:13:39.930 }, 00:13:39.930 { 00:13:39.930 "name": null, 00:13:39.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.930 "is_configured": false, 00:13:39.930 "data_offset": 0, 00:13:39.930 "data_size": 63488 00:13:39.930 }, 00:13:39.930 { 00:13:39.930 "name": "BaseBdev3", 00:13:39.930 "uuid": 
"8b9c3a12-c695-5848-81b3-b37e080671d7", 00:13:39.930 "is_configured": true, 00:13:39.930 "data_offset": 2048, 00:13:39.930 "data_size": 63488 00:13:39.930 }, 00:13:39.930 { 00:13:39.930 "name": "BaseBdev4", 00:13:39.930 "uuid": "fc6589e2-26ba-5560-9638-ea6ad185db73", 00:13:39.930 "is_configured": true, 00:13:39.930 "data_offset": 2048, 00:13:39.930 "data_size": 63488 00:13:39.930 } 00:13:39.930 ] 00:13:39.930 }' 00:13:39.930 13:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.930 13:23:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.497 13:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:40.497 13:23:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.497 13:23:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.497 [2024-11-17 13:23:29.445104] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:40.497 [2024-11-17 13:23:29.445184] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:40.497 [2024-11-17 13:23:29.445293] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:40.497 [2024-11-17 13:23:29.445412] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:40.497 [2024-11-17 13:23:29.445467] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:40.497 13:23:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.497 13:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.497 13:23:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.497 13:23:29 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@720 -- # jq length 00:13:40.497 13:23:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.497 13:23:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.497 13:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:40.497 13:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:40.497 13:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:40.497 13:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:40.497 13:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:40.497 13:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:40.497 13:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:40.497 13:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:40.497 13:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:40.497 13:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:40.497 13:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:40.497 13:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:40.497 13:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:40.497 /dev/nbd0 00:13:40.757 13:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:40.757 13:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:40.757 13:23:29 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:40.757 13:23:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:40.757 13:23:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:40.757 13:23:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:40.757 13:23:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:40.757 13:23:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:40.757 13:23:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:40.757 13:23:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:40.757 13:23:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:40.757 1+0 records in 00:13:40.757 1+0 records out 00:13:40.757 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000308064 s, 13.3 MB/s 00:13:40.757 13:23:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:40.757 13:23:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:40.757 13:23:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:40.757 13:23:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:40.757 13:23:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:40.757 13:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:40.757 13:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:40.757 13:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:40.757 /dev/nbd1 00:13:40.757 13:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:40.757 13:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:40.757 13:23:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:40.757 13:23:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:41.017 13:23:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:41.017 13:23:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:41.017 13:23:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:41.017 13:23:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:41.017 13:23:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:41.017 13:23:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:41.017 13:23:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:41.017 1+0 records in 00:13:41.017 1+0 records out 00:13:41.017 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000287848 s, 14.2 MB/s 00:13:41.017 13:23:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:41.017 13:23:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:41.017 13:23:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:41.017 13:23:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:13:41.017 13:23:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:41.017 13:23:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:41.017 13:23:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:41.017 13:23:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:41.017 13:23:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:41.017 13:23:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:41.017 13:23:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:41.017 13:23:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:41.017 13:23:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:41.017 13:23:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:41.017 13:23:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:41.276 13:23:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:41.276 13:23:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:41.276 13:23:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:41.276 13:23:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:41.276 13:23:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:41.276 13:23:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:41.276 13:23:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:41.276 13:23:30 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:41.276 13:23:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:41.276 13:23:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:41.535 13:23:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:41.535 13:23:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:41.535 13:23:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:41.535 13:23:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:41.535 13:23:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:41.535 13:23:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:41.535 13:23:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:41.535 13:23:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:41.535 13:23:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:41.535 13:23:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:41.535 13:23:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.535 13:23:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.535 13:23:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.535 13:23:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:41.535 13:23:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.535 13:23:30 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:41.535 [2024-11-17 13:23:30.629400] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:41.535 [2024-11-17 13:23:30.629456] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:41.535 [2024-11-17 13:23:30.629496] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:13:41.535 [2024-11-17 13:23:30.629505] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:41.535 [2024-11-17 13:23:30.631624] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:41.535 [2024-11-17 13:23:30.631711] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:41.535 [2024-11-17 13:23:30.631828] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:41.535 [2024-11-17 13:23:30.631885] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:41.535 [2024-11-17 13:23:30.632017] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:41.535 [2024-11-17 13:23:30.632113] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:41.535 spare 00:13:41.535 13:23:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.535 13:23:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:41.535 13:23:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.535 13:23:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.535 [2024-11-17 13:23:30.732002] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:41.535 [2024-11-17 13:23:30.732063] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:41.535 [2024-11-17 
13:23:30.732407] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:13:41.535 [2024-11-17 13:23:30.732595] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:41.535 [2024-11-17 13:23:30.732608] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:41.535 [2024-11-17 13:23:30.732779] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:41.535 13:23:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.535 13:23:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:41.535 13:23:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:41.535 13:23:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:41.535 13:23:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:41.535 13:23:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:41.535 13:23:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:41.535 13:23:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.535 13:23:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.535 13:23:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.535 13:23:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.535 13:23:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.535 13:23:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.535 13:23:30 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.535 13:23:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.795 13:23:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.795 13:23:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.795 "name": "raid_bdev1", 00:13:41.795 "uuid": "70459e7f-0563-4896-b84d-2982f338816f", 00:13:41.795 "strip_size_kb": 0, 00:13:41.795 "state": "online", 00:13:41.795 "raid_level": "raid1", 00:13:41.795 "superblock": true, 00:13:41.795 "num_base_bdevs": 4, 00:13:41.795 "num_base_bdevs_discovered": 3, 00:13:41.795 "num_base_bdevs_operational": 3, 00:13:41.795 "base_bdevs_list": [ 00:13:41.795 { 00:13:41.795 "name": "spare", 00:13:41.795 "uuid": "23d05da3-9781-534a-beba-b6d9889984b0", 00:13:41.795 "is_configured": true, 00:13:41.795 "data_offset": 2048, 00:13:41.795 "data_size": 63488 00:13:41.795 }, 00:13:41.795 { 00:13:41.795 "name": null, 00:13:41.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.795 "is_configured": false, 00:13:41.795 "data_offset": 2048, 00:13:41.795 "data_size": 63488 00:13:41.795 }, 00:13:41.795 { 00:13:41.795 "name": "BaseBdev3", 00:13:41.795 "uuid": "8b9c3a12-c695-5848-81b3-b37e080671d7", 00:13:41.795 "is_configured": true, 00:13:41.795 "data_offset": 2048, 00:13:41.795 "data_size": 63488 00:13:41.795 }, 00:13:41.795 { 00:13:41.795 "name": "BaseBdev4", 00:13:41.795 "uuid": "fc6589e2-26ba-5560-9638-ea6ad185db73", 00:13:41.795 "is_configured": true, 00:13:41.795 "data_offset": 2048, 00:13:41.795 "data_size": 63488 00:13:41.795 } 00:13:41.795 ] 00:13:41.795 }' 00:13:41.795 13:23:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.795 13:23:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.054 13:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process 
raid_bdev1 none none 00:13:42.054 13:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:42.054 13:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:42.054 13:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:42.054 13:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:42.054 13:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.054 13:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.054 13:23:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.054 13:23:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.054 13:23:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.054 13:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:42.054 "name": "raid_bdev1", 00:13:42.054 "uuid": "70459e7f-0563-4896-b84d-2982f338816f", 00:13:42.054 "strip_size_kb": 0, 00:13:42.054 "state": "online", 00:13:42.054 "raid_level": "raid1", 00:13:42.054 "superblock": true, 00:13:42.054 "num_base_bdevs": 4, 00:13:42.054 "num_base_bdevs_discovered": 3, 00:13:42.054 "num_base_bdevs_operational": 3, 00:13:42.054 "base_bdevs_list": [ 00:13:42.054 { 00:13:42.054 "name": "spare", 00:13:42.054 "uuid": "23d05da3-9781-534a-beba-b6d9889984b0", 00:13:42.054 "is_configured": true, 00:13:42.054 "data_offset": 2048, 00:13:42.054 "data_size": 63488 00:13:42.054 }, 00:13:42.054 { 00:13:42.054 "name": null, 00:13:42.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.054 "is_configured": false, 00:13:42.054 "data_offset": 2048, 00:13:42.054 "data_size": 63488 00:13:42.054 }, 00:13:42.054 { 00:13:42.054 "name": "BaseBdev3", 00:13:42.054 
"uuid": "8b9c3a12-c695-5848-81b3-b37e080671d7", 00:13:42.054 "is_configured": true, 00:13:42.054 "data_offset": 2048, 00:13:42.054 "data_size": 63488 00:13:42.054 }, 00:13:42.054 { 00:13:42.054 "name": "BaseBdev4", 00:13:42.054 "uuid": "fc6589e2-26ba-5560-9638-ea6ad185db73", 00:13:42.054 "is_configured": true, 00:13:42.054 "data_offset": 2048, 00:13:42.054 "data_size": 63488 00:13:42.054 } 00:13:42.054 ] 00:13:42.054 }' 00:13:42.054 13:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:42.313 13:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:42.313 13:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:42.313 13:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:42.313 13:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.313 13:23:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.313 13:23:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.313 13:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:42.313 13:23:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.313 13:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:42.313 13:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:42.313 13:23:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.313 13:23:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.313 [2024-11-17 13:23:31.400145] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:42.313 13:23:31 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.313 13:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:42.313 13:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:42.313 13:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:42.313 13:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:42.313 13:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:42.313 13:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:42.313 13:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.313 13:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.313 13:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.313 13:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.313 13:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.313 13:23:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.313 13:23:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.313 13:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.313 13:23:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.313 13:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.313 "name": "raid_bdev1", 00:13:42.313 "uuid": "70459e7f-0563-4896-b84d-2982f338816f", 00:13:42.313 "strip_size_kb": 0, 00:13:42.313 "state": "online", 
00:13:42.313 "raid_level": "raid1", 00:13:42.313 "superblock": true, 00:13:42.313 "num_base_bdevs": 4, 00:13:42.313 "num_base_bdevs_discovered": 2, 00:13:42.313 "num_base_bdevs_operational": 2, 00:13:42.313 "base_bdevs_list": [ 00:13:42.313 { 00:13:42.313 "name": null, 00:13:42.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.313 "is_configured": false, 00:13:42.313 "data_offset": 0, 00:13:42.313 "data_size": 63488 00:13:42.313 }, 00:13:42.313 { 00:13:42.313 "name": null, 00:13:42.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.313 "is_configured": false, 00:13:42.313 "data_offset": 2048, 00:13:42.313 "data_size": 63488 00:13:42.313 }, 00:13:42.313 { 00:13:42.313 "name": "BaseBdev3", 00:13:42.313 "uuid": "8b9c3a12-c695-5848-81b3-b37e080671d7", 00:13:42.313 "is_configured": true, 00:13:42.313 "data_offset": 2048, 00:13:42.313 "data_size": 63488 00:13:42.313 }, 00:13:42.313 { 00:13:42.313 "name": "BaseBdev4", 00:13:42.313 "uuid": "fc6589e2-26ba-5560-9638-ea6ad185db73", 00:13:42.313 "is_configured": true, 00:13:42.313 "data_offset": 2048, 00:13:42.313 "data_size": 63488 00:13:42.313 } 00:13:42.313 ] 00:13:42.313 }' 00:13:42.313 13:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.313 13:23:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.883 13:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:42.883 13:23:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.883 13:23:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.883 [2024-11-17 13:23:31.855378] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:42.883 [2024-11-17 13:23:31.855621] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 
00:13:42.883 [2024-11-17 13:23:31.855682] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:42.883 [2024-11-17 13:23:31.855762] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:42.883 [2024-11-17 13:23:31.869927] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:13:42.883 13:23:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.883 13:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:42.883 [2024-11-17 13:23:31.871926] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:43.821 13:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:43.821 13:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:43.821 13:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:43.821 13:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:43.821 13:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:43.822 13:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.822 13:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.822 13:23:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.822 13:23:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.822 13:23:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.822 13:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:43.822 "name": "raid_bdev1", 00:13:43.822 "uuid": 
"70459e7f-0563-4896-b84d-2982f338816f", 00:13:43.822 "strip_size_kb": 0, 00:13:43.822 "state": "online", 00:13:43.822 "raid_level": "raid1", 00:13:43.822 "superblock": true, 00:13:43.822 "num_base_bdevs": 4, 00:13:43.822 "num_base_bdevs_discovered": 3, 00:13:43.822 "num_base_bdevs_operational": 3, 00:13:43.822 "process": { 00:13:43.822 "type": "rebuild", 00:13:43.822 "target": "spare", 00:13:43.822 "progress": { 00:13:43.822 "blocks": 20480, 00:13:43.822 "percent": 32 00:13:43.822 } 00:13:43.822 }, 00:13:43.822 "base_bdevs_list": [ 00:13:43.822 { 00:13:43.822 "name": "spare", 00:13:43.822 "uuid": "23d05da3-9781-534a-beba-b6d9889984b0", 00:13:43.822 "is_configured": true, 00:13:43.822 "data_offset": 2048, 00:13:43.822 "data_size": 63488 00:13:43.822 }, 00:13:43.822 { 00:13:43.822 "name": null, 00:13:43.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.822 "is_configured": false, 00:13:43.822 "data_offset": 2048, 00:13:43.822 "data_size": 63488 00:13:43.822 }, 00:13:43.822 { 00:13:43.822 "name": "BaseBdev3", 00:13:43.822 "uuid": "8b9c3a12-c695-5848-81b3-b37e080671d7", 00:13:43.822 "is_configured": true, 00:13:43.822 "data_offset": 2048, 00:13:43.822 "data_size": 63488 00:13:43.822 }, 00:13:43.822 { 00:13:43.822 "name": "BaseBdev4", 00:13:43.822 "uuid": "fc6589e2-26ba-5560-9638-ea6ad185db73", 00:13:43.822 "is_configured": true, 00:13:43.822 "data_offset": 2048, 00:13:43.822 "data_size": 63488 00:13:43.822 } 00:13:43.822 ] 00:13:43.822 }' 00:13:43.822 13:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:43.822 13:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:43.822 13:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:43.822 13:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:43.822 13:23:33 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:43.822 13:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.822 13:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.822 [2024-11-17 13:23:33.019335] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:44.082 [2024-11-17 13:23:33.076785] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:44.082 [2024-11-17 13:23:33.076907] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:44.082 [2024-11-17 13:23:33.076947] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:44.082 [2024-11-17 13:23:33.076984] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:44.082 13:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.082 13:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:44.082 13:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:44.082 13:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:44.082 13:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:44.082 13:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:44.082 13:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:44.082 13:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.082 13:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.082 13:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:44.082 13:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.082 13:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.082 13:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.082 13:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.082 13:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:44.082 13:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.082 13:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.082 "name": "raid_bdev1", 00:13:44.082 "uuid": "70459e7f-0563-4896-b84d-2982f338816f", 00:13:44.082 "strip_size_kb": 0, 00:13:44.082 "state": "online", 00:13:44.082 "raid_level": "raid1", 00:13:44.082 "superblock": true, 00:13:44.082 "num_base_bdevs": 4, 00:13:44.082 "num_base_bdevs_discovered": 2, 00:13:44.082 "num_base_bdevs_operational": 2, 00:13:44.082 "base_bdevs_list": [ 00:13:44.082 { 00:13:44.082 "name": null, 00:13:44.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.082 "is_configured": false, 00:13:44.082 "data_offset": 0, 00:13:44.082 "data_size": 63488 00:13:44.082 }, 00:13:44.082 { 00:13:44.082 "name": null, 00:13:44.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.082 "is_configured": false, 00:13:44.082 "data_offset": 2048, 00:13:44.082 "data_size": 63488 00:13:44.082 }, 00:13:44.082 { 00:13:44.082 "name": "BaseBdev3", 00:13:44.082 "uuid": "8b9c3a12-c695-5848-81b3-b37e080671d7", 00:13:44.082 "is_configured": true, 00:13:44.082 "data_offset": 2048, 00:13:44.082 "data_size": 63488 00:13:44.082 }, 00:13:44.082 { 00:13:44.082 "name": "BaseBdev4", 00:13:44.082 "uuid": "fc6589e2-26ba-5560-9638-ea6ad185db73", 00:13:44.082 "is_configured": true, 00:13:44.082 
"data_offset": 2048, 00:13:44.082 "data_size": 63488 00:13:44.082 } 00:13:44.082 ] 00:13:44.082 }' 00:13:44.082 13:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.082 13:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.342 13:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:44.342 13:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.342 13:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.342 [2024-11-17 13:23:33.565992] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:44.342 [2024-11-17 13:23:33.566126] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:44.342 [2024-11-17 13:23:33.566192] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:13:44.342 [2024-11-17 13:23:33.566244] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:44.342 [2024-11-17 13:23:33.566825] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:44.342 [2024-11-17 13:23:33.566899] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:44.342 [2024-11-17 13:23:33.567068] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:44.602 [2024-11-17 13:23:33.567115] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:44.602 [2024-11-17 13:23:33.567174] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:44.602 [2024-11-17 13:23:33.567278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:44.602 [2024-11-17 13:23:33.581902] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:13:44.602 spare 00:13:44.602 13:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.602 13:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:44.602 [2024-11-17 13:23:33.584094] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:45.541 13:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:45.541 13:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:45.541 13:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:45.541 13:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:45.541 13:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:45.541 13:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.541 13:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.541 13:23:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.541 13:23:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.541 13:23:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.541 13:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:45.541 "name": "raid_bdev1", 00:13:45.541 "uuid": "70459e7f-0563-4896-b84d-2982f338816f", 00:13:45.541 "strip_size_kb": 0, 00:13:45.541 "state": "online", 00:13:45.541 
"raid_level": "raid1", 00:13:45.541 "superblock": true, 00:13:45.541 "num_base_bdevs": 4, 00:13:45.541 "num_base_bdevs_discovered": 3, 00:13:45.541 "num_base_bdevs_operational": 3, 00:13:45.541 "process": { 00:13:45.541 "type": "rebuild", 00:13:45.541 "target": "spare", 00:13:45.541 "progress": { 00:13:45.541 "blocks": 20480, 00:13:45.541 "percent": 32 00:13:45.541 } 00:13:45.541 }, 00:13:45.541 "base_bdevs_list": [ 00:13:45.541 { 00:13:45.541 "name": "spare", 00:13:45.541 "uuid": "23d05da3-9781-534a-beba-b6d9889984b0", 00:13:45.541 "is_configured": true, 00:13:45.541 "data_offset": 2048, 00:13:45.541 "data_size": 63488 00:13:45.541 }, 00:13:45.541 { 00:13:45.541 "name": null, 00:13:45.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.541 "is_configured": false, 00:13:45.541 "data_offset": 2048, 00:13:45.541 "data_size": 63488 00:13:45.541 }, 00:13:45.541 { 00:13:45.541 "name": "BaseBdev3", 00:13:45.541 "uuid": "8b9c3a12-c695-5848-81b3-b37e080671d7", 00:13:45.541 "is_configured": true, 00:13:45.541 "data_offset": 2048, 00:13:45.541 "data_size": 63488 00:13:45.541 }, 00:13:45.541 { 00:13:45.541 "name": "BaseBdev4", 00:13:45.541 "uuid": "fc6589e2-26ba-5560-9638-ea6ad185db73", 00:13:45.541 "is_configured": true, 00:13:45.541 "data_offset": 2048, 00:13:45.541 "data_size": 63488 00:13:45.541 } 00:13:45.541 ] 00:13:45.541 }' 00:13:45.541 13:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:45.541 13:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:45.541 13:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:45.541 13:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:45.541 13:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:45.541 13:23:34 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.541 13:23:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.541 [2024-11-17 13:23:34.743526] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:45.800 [2024-11-17 13:23:34.789729] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:45.800 [2024-11-17 13:23:34.789884] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:45.800 [2024-11-17 13:23:34.789903] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:45.800 [2024-11-17 13:23:34.789912] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:45.800 13:23:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.800 13:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:45.800 13:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:45.800 13:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:45.800 13:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:45.800 13:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:45.801 13:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:45.801 13:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.801 13:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.801 13:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.801 13:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.801 
13:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.801 13:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.801 13:23:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.801 13:23:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.801 13:23:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.801 13:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.801 "name": "raid_bdev1", 00:13:45.801 "uuid": "70459e7f-0563-4896-b84d-2982f338816f", 00:13:45.801 "strip_size_kb": 0, 00:13:45.801 "state": "online", 00:13:45.801 "raid_level": "raid1", 00:13:45.801 "superblock": true, 00:13:45.801 "num_base_bdevs": 4, 00:13:45.801 "num_base_bdevs_discovered": 2, 00:13:45.801 "num_base_bdevs_operational": 2, 00:13:45.801 "base_bdevs_list": [ 00:13:45.801 { 00:13:45.801 "name": null, 00:13:45.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.801 "is_configured": false, 00:13:45.801 "data_offset": 0, 00:13:45.801 "data_size": 63488 00:13:45.801 }, 00:13:45.801 { 00:13:45.801 "name": null, 00:13:45.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.801 "is_configured": false, 00:13:45.801 "data_offset": 2048, 00:13:45.801 "data_size": 63488 00:13:45.801 }, 00:13:45.801 { 00:13:45.801 "name": "BaseBdev3", 00:13:45.801 "uuid": "8b9c3a12-c695-5848-81b3-b37e080671d7", 00:13:45.801 "is_configured": true, 00:13:45.801 "data_offset": 2048, 00:13:45.801 "data_size": 63488 00:13:45.801 }, 00:13:45.801 { 00:13:45.801 "name": "BaseBdev4", 00:13:45.801 "uuid": "fc6589e2-26ba-5560-9638-ea6ad185db73", 00:13:45.801 "is_configured": true, 00:13:45.801 "data_offset": 2048, 00:13:45.801 "data_size": 63488 00:13:45.801 } 00:13:45.801 ] 00:13:45.801 }' 00:13:45.801 13:23:34 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.801 13:23:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.060 13:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:46.060 13:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:46.060 13:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:46.060 13:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:46.060 13:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:46.060 13:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.060 13:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.060 13:23:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.060 13:23:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.060 13:23:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.060 13:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:46.060 "name": "raid_bdev1", 00:13:46.060 "uuid": "70459e7f-0563-4896-b84d-2982f338816f", 00:13:46.060 "strip_size_kb": 0, 00:13:46.060 "state": "online", 00:13:46.060 "raid_level": "raid1", 00:13:46.060 "superblock": true, 00:13:46.060 "num_base_bdevs": 4, 00:13:46.060 "num_base_bdevs_discovered": 2, 00:13:46.060 "num_base_bdevs_operational": 2, 00:13:46.060 "base_bdevs_list": [ 00:13:46.060 { 00:13:46.060 "name": null, 00:13:46.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.060 "is_configured": false, 00:13:46.060 "data_offset": 0, 00:13:46.060 "data_size": 63488 00:13:46.060 }, 00:13:46.060 
{ 00:13:46.060 "name": null, 00:13:46.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.060 "is_configured": false, 00:13:46.060 "data_offset": 2048, 00:13:46.060 "data_size": 63488 00:13:46.060 }, 00:13:46.060 { 00:13:46.060 "name": "BaseBdev3", 00:13:46.060 "uuid": "8b9c3a12-c695-5848-81b3-b37e080671d7", 00:13:46.060 "is_configured": true, 00:13:46.060 "data_offset": 2048, 00:13:46.060 "data_size": 63488 00:13:46.060 }, 00:13:46.060 { 00:13:46.060 "name": "BaseBdev4", 00:13:46.060 "uuid": "fc6589e2-26ba-5560-9638-ea6ad185db73", 00:13:46.060 "is_configured": true, 00:13:46.060 "data_offset": 2048, 00:13:46.060 "data_size": 63488 00:13:46.060 } 00:13:46.060 ] 00:13:46.060 }' 00:13:46.060 13:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:46.319 13:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:46.319 13:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:46.319 13:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:46.319 13:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:46.319 13:23:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.319 13:23:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.319 13:23:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.319 13:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:46.319 13:23:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.319 13:23:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.319 [2024-11-17 13:23:35.377905] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:46.319 [2024-11-17 13:23:35.378009] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:46.319 [2024-11-17 13:23:35.378034] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:13:46.319 [2024-11-17 13:23:35.378045] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:46.319 [2024-11-17 13:23:35.378537] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:46.319 [2024-11-17 13:23:35.378560] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:46.319 [2024-11-17 13:23:35.378642] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:46.319 [2024-11-17 13:23:35.378658] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:46.319 [2024-11-17 13:23:35.378666] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:46.319 [2024-11-17 13:23:35.378693] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:46.319 BaseBdev1 00:13:46.320 13:23:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.320 13:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:47.265 13:23:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:47.265 13:23:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:47.265 13:23:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:47.265 13:23:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:47.265 13:23:36 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:47.265 13:23:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:47.265 13:23:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.265 13:23:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.265 13:23:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.265 13:23:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.265 13:23:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.265 13:23:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.265 13:23:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.265 13:23:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.265 13:23:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.265 13:23:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.265 "name": "raid_bdev1", 00:13:47.265 "uuid": "70459e7f-0563-4896-b84d-2982f338816f", 00:13:47.265 "strip_size_kb": 0, 00:13:47.265 "state": "online", 00:13:47.265 "raid_level": "raid1", 00:13:47.265 "superblock": true, 00:13:47.265 "num_base_bdevs": 4, 00:13:47.265 "num_base_bdevs_discovered": 2, 00:13:47.265 "num_base_bdevs_operational": 2, 00:13:47.265 "base_bdevs_list": [ 00:13:47.265 { 00:13:47.265 "name": null, 00:13:47.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.265 "is_configured": false, 00:13:47.265 "data_offset": 0, 00:13:47.265 "data_size": 63488 00:13:47.265 }, 00:13:47.265 { 00:13:47.265 "name": null, 00:13:47.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.265 
"is_configured": false, 00:13:47.265 "data_offset": 2048, 00:13:47.265 "data_size": 63488 00:13:47.265 }, 00:13:47.265 { 00:13:47.265 "name": "BaseBdev3", 00:13:47.265 "uuid": "8b9c3a12-c695-5848-81b3-b37e080671d7", 00:13:47.265 "is_configured": true, 00:13:47.265 "data_offset": 2048, 00:13:47.265 "data_size": 63488 00:13:47.265 }, 00:13:47.265 { 00:13:47.265 "name": "BaseBdev4", 00:13:47.265 "uuid": "fc6589e2-26ba-5560-9638-ea6ad185db73", 00:13:47.265 "is_configured": true, 00:13:47.265 "data_offset": 2048, 00:13:47.265 "data_size": 63488 00:13:47.265 } 00:13:47.265 ] 00:13:47.265 }' 00:13:47.265 13:23:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.265 13:23:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.835 13:23:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:47.835 13:23:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:47.835 13:23:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:47.835 13:23:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:47.835 13:23:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:47.835 13:23:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.835 13:23:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.835 13:23:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.835 13:23:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.835 13:23:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.835 13:23:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:13:47.835 "name": "raid_bdev1", 00:13:47.835 "uuid": "70459e7f-0563-4896-b84d-2982f338816f", 00:13:47.835 "strip_size_kb": 0, 00:13:47.835 "state": "online", 00:13:47.835 "raid_level": "raid1", 00:13:47.835 "superblock": true, 00:13:47.835 "num_base_bdevs": 4, 00:13:47.835 "num_base_bdevs_discovered": 2, 00:13:47.835 "num_base_bdevs_operational": 2, 00:13:47.835 "base_bdevs_list": [ 00:13:47.835 { 00:13:47.835 "name": null, 00:13:47.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.835 "is_configured": false, 00:13:47.835 "data_offset": 0, 00:13:47.835 "data_size": 63488 00:13:47.835 }, 00:13:47.835 { 00:13:47.835 "name": null, 00:13:47.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.835 "is_configured": false, 00:13:47.835 "data_offset": 2048, 00:13:47.835 "data_size": 63488 00:13:47.835 }, 00:13:47.835 { 00:13:47.835 "name": "BaseBdev3", 00:13:47.835 "uuid": "8b9c3a12-c695-5848-81b3-b37e080671d7", 00:13:47.835 "is_configured": true, 00:13:47.835 "data_offset": 2048, 00:13:47.835 "data_size": 63488 00:13:47.835 }, 00:13:47.835 { 00:13:47.835 "name": "BaseBdev4", 00:13:47.835 "uuid": "fc6589e2-26ba-5560-9638-ea6ad185db73", 00:13:47.835 "is_configured": true, 00:13:47.835 "data_offset": 2048, 00:13:47.835 "data_size": 63488 00:13:47.835 } 00:13:47.835 ] 00:13:47.835 }' 00:13:47.835 13:23:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:47.835 13:23:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:47.835 13:23:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:47.835 13:23:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:47.835 13:23:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:47.835 13:23:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:13:47.835 13:23:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:47.835 13:23:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:47.835 13:23:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:47.835 13:23:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:47.835 13:23:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:47.835 13:23:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:47.835 13:23:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.835 13:23:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.835 [2024-11-17 13:23:36.967185] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:47.835 [2024-11-17 13:23:36.967444] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:47.835 [2024-11-17 13:23:36.967465] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:47.836 request: 00:13:47.836 { 00:13:47.836 "base_bdev": "BaseBdev1", 00:13:47.836 "raid_bdev": "raid_bdev1", 00:13:47.836 "method": "bdev_raid_add_base_bdev", 00:13:47.836 "req_id": 1 00:13:47.836 } 00:13:47.836 Got JSON-RPC error response 00:13:47.836 response: 00:13:47.836 { 00:13:47.836 "code": -22, 00:13:47.836 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:47.836 } 00:13:47.836 13:23:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:47.836 13:23:36 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:13:47.836 13:23:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:47.836 13:23:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:47.836 13:23:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:47.836 13:23:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:48.774 13:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:48.774 13:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:48.774 13:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:48.774 13:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:48.774 13:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:48.774 13:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:48.774 13:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.774 13:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.774 13:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.774 13:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.774 13:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.774 13:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.774 13:23:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.774 13:23:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:13:49.034 13:23:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.034 13:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:49.034 "name": "raid_bdev1", 00:13:49.034 "uuid": "70459e7f-0563-4896-b84d-2982f338816f", 00:13:49.034 "strip_size_kb": 0, 00:13:49.034 "state": "online", 00:13:49.034 "raid_level": "raid1", 00:13:49.034 "superblock": true, 00:13:49.034 "num_base_bdevs": 4, 00:13:49.034 "num_base_bdevs_discovered": 2, 00:13:49.034 "num_base_bdevs_operational": 2, 00:13:49.034 "base_bdevs_list": [ 00:13:49.034 { 00:13:49.034 "name": null, 00:13:49.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.034 "is_configured": false, 00:13:49.034 "data_offset": 0, 00:13:49.034 "data_size": 63488 00:13:49.034 }, 00:13:49.034 { 00:13:49.034 "name": null, 00:13:49.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.034 "is_configured": false, 00:13:49.034 "data_offset": 2048, 00:13:49.034 "data_size": 63488 00:13:49.034 }, 00:13:49.034 { 00:13:49.034 "name": "BaseBdev3", 00:13:49.034 "uuid": "8b9c3a12-c695-5848-81b3-b37e080671d7", 00:13:49.034 "is_configured": true, 00:13:49.034 "data_offset": 2048, 00:13:49.034 "data_size": 63488 00:13:49.034 }, 00:13:49.034 { 00:13:49.034 "name": "BaseBdev4", 00:13:49.034 "uuid": "fc6589e2-26ba-5560-9638-ea6ad185db73", 00:13:49.034 "is_configured": true, 00:13:49.034 "data_offset": 2048, 00:13:49.034 "data_size": 63488 00:13:49.034 } 00:13:49.034 ] 00:13:49.034 }' 00:13:49.034 13:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.034 13:23:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.294 13:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:49.294 13:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:49.294 13:23:38 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:49.294 13:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:49.294 13:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:49.294 13:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.294 13:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.294 13:23:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.294 13:23:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.294 13:23:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.294 13:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:49.294 "name": "raid_bdev1", 00:13:49.294 "uuid": "70459e7f-0563-4896-b84d-2982f338816f", 00:13:49.294 "strip_size_kb": 0, 00:13:49.294 "state": "online", 00:13:49.294 "raid_level": "raid1", 00:13:49.294 "superblock": true, 00:13:49.294 "num_base_bdevs": 4, 00:13:49.294 "num_base_bdevs_discovered": 2, 00:13:49.294 "num_base_bdevs_operational": 2, 00:13:49.294 "base_bdevs_list": [ 00:13:49.294 { 00:13:49.294 "name": null, 00:13:49.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.294 "is_configured": false, 00:13:49.294 "data_offset": 0, 00:13:49.294 "data_size": 63488 00:13:49.294 }, 00:13:49.294 { 00:13:49.294 "name": null, 00:13:49.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.294 "is_configured": false, 00:13:49.294 "data_offset": 2048, 00:13:49.294 "data_size": 63488 00:13:49.294 }, 00:13:49.294 { 00:13:49.294 "name": "BaseBdev3", 00:13:49.294 "uuid": "8b9c3a12-c695-5848-81b3-b37e080671d7", 00:13:49.294 "is_configured": true, 00:13:49.294 "data_offset": 2048, 00:13:49.294 "data_size": 63488 00:13:49.294 }, 
00:13:49.294 { 00:13:49.294 "name": "BaseBdev4", 00:13:49.294 "uuid": "fc6589e2-26ba-5560-9638-ea6ad185db73", 00:13:49.294 "is_configured": true, 00:13:49.294 "data_offset": 2048, 00:13:49.294 "data_size": 63488 00:13:49.294 } 00:13:49.294 ] 00:13:49.294 }' 00:13:49.294 13:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:49.554 13:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:49.554 13:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:49.554 13:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:49.554 13:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 77887 00:13:49.554 13:23:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 77887 ']' 00:13:49.554 13:23:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 77887 00:13:49.554 13:23:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:49.554 13:23:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:49.554 13:23:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77887 00:13:49.554 13:23:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:49.555 13:23:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:49.555 13:23:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77887' 00:13:49.555 killing process with pid 77887 00:13:49.555 13:23:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 77887 00:13:49.555 Received shutdown signal, test time was about 60.000000 seconds 00:13:49.555 00:13:49.555 Latency(us) 00:13:49.555 
[2024-11-17T13:23:38.779Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:49.555 [2024-11-17T13:23:38.779Z] =================================================================================================================== 00:13:49.555 [2024-11-17T13:23:38.779Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:49.555 [2024-11-17 13:23:38.613951] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:49.555 [2024-11-17 13:23:38.614075] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:49.555 13:23:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 77887 00:13:49.555 [2024-11-17 13:23:38.614148] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:49.555 [2024-11-17 13:23:38.614158] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:50.129 [2024-11-17 13:23:39.096296] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:51.067 13:23:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:13:51.067 00:13:51.067 real 0m25.008s 00:13:51.067 user 0m30.013s 00:13:51.067 sys 0m3.805s 00:13:51.067 13:23:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:51.067 13:23:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.067 ************************************ 00:13:51.067 END TEST raid_rebuild_test_sb 00:13:51.067 ************************************ 00:13:51.067 13:23:40 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:13:51.067 13:23:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:51.067 13:23:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:51.067 13:23:40 bdev_raid -- common/autotest_common.sh@10 -- # 
set +x 00:13:51.067 ************************************ 00:13:51.067 START TEST raid_rebuild_test_io 00:13:51.067 ************************************ 00:13:51.067 13:23:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:13:51.067 13:23:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:51.067 13:23:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:51.067 13:23:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:51.067 13:23:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:51.067 13:23:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:51.067 13:23:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:51.067 13:23:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:51.067 13:23:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:51.067 13:23:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:51.067 13:23:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:51.067 13:23:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:51.067 13:23:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:51.067 13:23:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:51.067 13:23:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:51.067 13:23:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:51.068 13:23:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:51.068 13:23:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # 
echo BaseBdev4 00:13:51.068 13:23:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:51.068 13:23:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:51.068 13:23:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:51.068 13:23:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:51.068 13:23:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:51.068 13:23:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:51.068 13:23:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:51.068 13:23:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:51.068 13:23:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:51.068 13:23:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:51.068 13:23:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:51.068 13:23:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:51.328 13:23:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78635 00:13:51.328 13:23:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:51.328 13:23:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78635 00:13:51.328 13:23:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 78635 ']' 00:13:51.328 13:23:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:51.328 13:23:40 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:13:51.328 13:23:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:51.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:51.328 13:23:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:51.328 13:23:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.328 [2024-11-17 13:23:40.374578] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:13:51.328 [2024-11-17 13:23:40.374776] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:13:51.328 Zero copy mechanism will not be used. 00:13:51.328 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78635 ] 00:13:51.328 [2024-11-17 13:23:40.547035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.588 [2024-11-17 13:23:40.658850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.847 [2024-11-17 13:23:40.849550] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:51.847 [2024-11-17 13:23:40.849652] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:52.108 13:23:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:52.108 13:23:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:13:52.108 13:23:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:52.108 13:23:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 
00:13:52.108 13:23:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.108 13:23:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.108 BaseBdev1_malloc 00:13:52.108 13:23:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.108 13:23:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:52.108 13:23:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.108 13:23:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.108 [2024-11-17 13:23:41.272154] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:52.108 [2024-11-17 13:23:41.272249] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:52.108 [2024-11-17 13:23:41.272278] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:52.108 [2024-11-17 13:23:41.272292] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:52.108 [2024-11-17 13:23:41.274741] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:52.108 [2024-11-17 13:23:41.274783] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:52.108 BaseBdev1 00:13:52.108 13:23:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.108 13:23:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:52.108 13:23:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:52.108 13:23:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.108 13:23:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # 
set +x 00:13:52.108 BaseBdev2_malloc 00:13:52.108 13:23:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.108 13:23:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:52.108 13:23:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.108 13:23:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.369 [2024-11-17 13:23:41.334443] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:52.369 [2024-11-17 13:23:41.334507] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:52.370 [2024-11-17 13:23:41.334527] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:52.370 [2024-11-17 13:23:41.334540] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:52.370 [2024-11-17 13:23:41.336924] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:52.370 [2024-11-17 13:23:41.337030] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:52.370 BaseBdev2 00:13:52.370 13:23:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.370 13:23:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:52.370 13:23:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:52.370 13:23:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.370 13:23:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.370 BaseBdev3_malloc 00:13:52.370 13:23:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.370 13:23:41 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:52.370 13:23:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.370 13:23:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.370 [2024-11-17 13:23:41.409353] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:52.370 [2024-11-17 13:23:41.409402] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:52.370 [2024-11-17 13:23:41.409425] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:52.370 [2024-11-17 13:23:41.409437] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:52.370 [2024-11-17 13:23:41.411797] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:52.370 [2024-11-17 13:23:41.411838] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:52.370 BaseBdev3 00:13:52.370 13:23:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.370 13:23:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:52.370 13:23:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:52.370 13:23:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.370 13:23:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.370 BaseBdev4_malloc 00:13:52.370 13:23:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.370 13:23:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:52.370 13:23:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:52.370 13:23:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.370 [2024-11-17 13:23:41.470880] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:52.370 [2024-11-17 13:23:41.470932] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:52.370 [2024-11-17 13:23:41.470951] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:52.370 [2024-11-17 13:23:41.470963] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:52.370 [2024-11-17 13:23:41.473276] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:52.370 [2024-11-17 13:23:41.473313] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:52.370 BaseBdev4 00:13:52.370 13:23:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.370 13:23:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:52.370 13:23:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.370 13:23:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.370 spare_malloc 00:13:52.370 13:23:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.370 13:23:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:52.370 13:23:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.370 13:23:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.370 spare_delay 00:13:52.370 13:23:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.370 13:23:41 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:52.370 13:23:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.370 13:23:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.370 [2024-11-17 13:23:41.543585] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:52.370 [2024-11-17 13:23:41.543639] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:52.370 [2024-11-17 13:23:41.543659] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:52.370 [2024-11-17 13:23:41.543670] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:52.370 [2024-11-17 13:23:41.545982] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:52.370 [2024-11-17 13:23:41.546018] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:52.370 spare 00:13:52.370 13:23:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.370 13:23:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:52.370 13:23:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.370 13:23:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.370 [2024-11-17 13:23:41.555624] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:52.370 [2024-11-17 13:23:41.557668] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:52.370 [2024-11-17 13:23:41.557737] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:52.370 [2024-11-17 13:23:41.557798] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:13:52.370 [2024-11-17 13:23:41.557873] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:52.370 [2024-11-17 13:23:41.557886] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:52.370 [2024-11-17 13:23:41.558128] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:52.370 [2024-11-17 13:23:41.558334] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:52.370 [2024-11-17 13:23:41.558349] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:52.370 [2024-11-17 13:23:41.558503] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:52.370 13:23:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.370 13:23:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:52.370 13:23:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:52.370 13:23:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:52.370 13:23:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:52.370 13:23:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:52.370 13:23:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:52.370 13:23:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.370 13:23:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.370 13:23:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.370 13:23:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:13:52.370 13:23:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.370 13:23:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.370 13:23:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.370 13:23:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.370 13:23:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.630 13:23:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.630 "name": "raid_bdev1", 00:13:52.630 "uuid": "0953f716-4edd-405d-a940-1816b1a1c233", 00:13:52.630 "strip_size_kb": 0, 00:13:52.630 "state": "online", 00:13:52.630 "raid_level": "raid1", 00:13:52.630 "superblock": false, 00:13:52.630 "num_base_bdevs": 4, 00:13:52.630 "num_base_bdevs_discovered": 4, 00:13:52.630 "num_base_bdevs_operational": 4, 00:13:52.630 "base_bdevs_list": [ 00:13:52.630 { 00:13:52.630 "name": "BaseBdev1", 00:13:52.630 "uuid": "e4518869-2440-5748-a378-86fec3b6a55b", 00:13:52.630 "is_configured": true, 00:13:52.630 "data_offset": 0, 00:13:52.630 "data_size": 65536 00:13:52.630 }, 00:13:52.630 { 00:13:52.630 "name": "BaseBdev2", 00:13:52.630 "uuid": "17115bae-7506-55a3-b085-98bf23c0457e", 00:13:52.630 "is_configured": true, 00:13:52.630 "data_offset": 0, 00:13:52.630 "data_size": 65536 00:13:52.630 }, 00:13:52.630 { 00:13:52.630 "name": "BaseBdev3", 00:13:52.630 "uuid": "e4abdc23-3f2a-5151-ab4d-dfe6c77be488", 00:13:52.630 "is_configured": true, 00:13:52.630 "data_offset": 0, 00:13:52.630 "data_size": 65536 00:13:52.630 }, 00:13:52.630 { 00:13:52.631 "name": "BaseBdev4", 00:13:52.631 "uuid": "87056400-4417-52a6-b263-0e6134aabd32", 00:13:52.631 "is_configured": true, 00:13:52.631 "data_offset": 0, 00:13:52.631 "data_size": 65536 00:13:52.631 } 00:13:52.631 ] 00:13:52.631 }' 00:13:52.631 
13:23:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.631 13:23:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.891 13:23:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:52.891 13:23:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.891 13:23:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.891 13:23:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:52.891 [2024-11-17 13:23:42.039189] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:52.891 13:23:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.891 13:23:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:52.891 13:23:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.891 13:23:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.891 13:23:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.891 13:23:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:52.891 13:23:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.149 13:23:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:53.149 13:23:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:53.149 13:23:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:53.149 13:23:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:53.149 13:23:42 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.149 13:23:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:53.149 [2024-11-17 13:23:42.138613] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:53.149 13:23:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.149 13:23:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:53.149 13:23:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:53.149 13:23:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:53.150 13:23:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:53.150 13:23:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:53.150 13:23:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:53.150 13:23:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:53.150 13:23:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:53.150 13:23:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:53.150 13:23:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:53.150 13:23:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.150 13:23:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:53.150 13:23:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.150 13:23:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:53.150 13:23:42 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.150 13:23:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:53.150 "name": "raid_bdev1", 00:13:53.150 "uuid": "0953f716-4edd-405d-a940-1816b1a1c233", 00:13:53.150 "strip_size_kb": 0, 00:13:53.150 "state": "online", 00:13:53.150 "raid_level": "raid1", 00:13:53.150 "superblock": false, 00:13:53.150 "num_base_bdevs": 4, 00:13:53.150 "num_base_bdevs_discovered": 3, 00:13:53.150 "num_base_bdevs_operational": 3, 00:13:53.150 "base_bdevs_list": [ 00:13:53.150 { 00:13:53.150 "name": null, 00:13:53.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.150 "is_configured": false, 00:13:53.150 "data_offset": 0, 00:13:53.150 "data_size": 65536 00:13:53.150 }, 00:13:53.150 { 00:13:53.150 "name": "BaseBdev2", 00:13:53.150 "uuid": "17115bae-7506-55a3-b085-98bf23c0457e", 00:13:53.150 "is_configured": true, 00:13:53.150 "data_offset": 0, 00:13:53.150 "data_size": 65536 00:13:53.150 }, 00:13:53.150 { 00:13:53.150 "name": "BaseBdev3", 00:13:53.150 "uuid": "e4abdc23-3f2a-5151-ab4d-dfe6c77be488", 00:13:53.150 "is_configured": true, 00:13:53.150 "data_offset": 0, 00:13:53.150 "data_size": 65536 00:13:53.150 }, 00:13:53.150 { 00:13:53.150 "name": "BaseBdev4", 00:13:53.150 "uuid": "87056400-4417-52a6-b263-0e6134aabd32", 00:13:53.150 "is_configured": true, 00:13:53.150 "data_offset": 0, 00:13:53.150 "data_size": 65536 00:13:53.150 } 00:13:53.150 ] 00:13:53.150 }' 00:13:53.150 13:23:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:53.150 13:23:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:53.150 [2024-11-17 13:23:42.216082] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:53.150 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:53.150 Zero copy mechanism will not be used. 00:13:53.150 Running I/O for 60 seconds... 
00:13:53.409 13:23:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:53.409 13:23:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.409 13:23:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:53.409 [2024-11-17 13:23:42.554856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:53.409 13:23:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.409 13:23:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:53.669 [2024-11-17 13:23:42.653625] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:13:53.669 [2024-11-17 13:23:42.655979] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:53.669 [2024-11-17 13:23:42.782559] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:53.669 [2024-11-17 13:23:42.784858] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:53.930 [2024-11-17 13:23:43.001972] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:53.930 [2024-11-17 13:23:43.002334] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:54.190 144.00 IOPS, 432.00 MiB/s [2024-11-17T13:23:43.414Z] [2024-11-17 13:23:43.332440] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:54.449 [2024-11-17 13:23:43.569303] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:54.449 [2024-11-17 13:23:43.570748] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:54.449 13:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:54.449 13:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:54.449 13:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:54.449 13:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:54.449 13:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:54.449 13:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.449 13:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.449 13:23:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.449 13:23:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.449 13:23:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.449 13:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:54.449 "name": "raid_bdev1", 00:13:54.449 "uuid": "0953f716-4edd-405d-a940-1816b1a1c233", 00:13:54.449 "strip_size_kb": 0, 00:13:54.449 "state": "online", 00:13:54.449 "raid_level": "raid1", 00:13:54.449 "superblock": false, 00:13:54.449 "num_base_bdevs": 4, 00:13:54.449 "num_base_bdevs_discovered": 4, 00:13:54.449 "num_base_bdevs_operational": 4, 00:13:54.449 "process": { 00:13:54.449 "type": "rebuild", 00:13:54.449 "target": "spare", 00:13:54.449 "progress": { 00:13:54.449 "blocks": 10240, 00:13:54.449 "percent": 15 00:13:54.449 } 00:13:54.449 }, 00:13:54.449 "base_bdevs_list": [ 00:13:54.449 { 00:13:54.449 "name": "spare", 00:13:54.449 "uuid": 
"97fd2207-633d-58ef-96cc-2bc1a61f77f9", 00:13:54.449 "is_configured": true, 00:13:54.449 "data_offset": 0, 00:13:54.449 "data_size": 65536 00:13:54.449 }, 00:13:54.449 { 00:13:54.449 "name": "BaseBdev2", 00:13:54.449 "uuid": "17115bae-7506-55a3-b085-98bf23c0457e", 00:13:54.449 "is_configured": true, 00:13:54.449 "data_offset": 0, 00:13:54.449 "data_size": 65536 00:13:54.449 }, 00:13:54.449 { 00:13:54.449 "name": "BaseBdev3", 00:13:54.449 "uuid": "e4abdc23-3f2a-5151-ab4d-dfe6c77be488", 00:13:54.449 "is_configured": true, 00:13:54.449 "data_offset": 0, 00:13:54.449 "data_size": 65536 00:13:54.449 }, 00:13:54.449 { 00:13:54.449 "name": "BaseBdev4", 00:13:54.449 "uuid": "87056400-4417-52a6-b263-0e6134aabd32", 00:13:54.449 "is_configured": true, 00:13:54.449 "data_offset": 0, 00:13:54.449 "data_size": 65536 00:13:54.449 } 00:13:54.449 ] 00:13:54.449 }' 00:13:54.449 13:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:54.708 13:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:54.708 13:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:54.708 13:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:54.708 13:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:54.708 13:23:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.708 13:23:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.708 [2024-11-17 13:23:43.759012] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:54.708 [2024-11-17 13:23:43.869036] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:54.708 [2024-11-17 13:23:43.879258] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:13:54.708 [2024-11-17 13:23:43.879352] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:54.708 [2024-11-17 13:23:43.879379] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:54.708 [2024-11-17 13:23:43.910009] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:13:54.708 13:23:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.708 13:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:54.708 13:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:54.708 13:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:54.709 13:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:54.709 13:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:54.709 13:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:54.709 13:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.709 13:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:54.709 13:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:54.709 13:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.709 13:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.709 13:23:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.709 13:23:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.969 13:23:43 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.969 13:23:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.969 13:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.969 "name": "raid_bdev1", 00:13:54.969 "uuid": "0953f716-4edd-405d-a940-1816b1a1c233", 00:13:54.969 "strip_size_kb": 0, 00:13:54.969 "state": "online", 00:13:54.969 "raid_level": "raid1", 00:13:54.969 "superblock": false, 00:13:54.969 "num_base_bdevs": 4, 00:13:54.969 "num_base_bdevs_discovered": 3, 00:13:54.969 "num_base_bdevs_operational": 3, 00:13:54.969 "base_bdevs_list": [ 00:13:54.969 { 00:13:54.969 "name": null, 00:13:54.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.969 "is_configured": false, 00:13:54.969 "data_offset": 0, 00:13:54.969 "data_size": 65536 00:13:54.969 }, 00:13:54.969 { 00:13:54.969 "name": "BaseBdev2", 00:13:54.969 "uuid": "17115bae-7506-55a3-b085-98bf23c0457e", 00:13:54.969 "is_configured": true, 00:13:54.969 "data_offset": 0, 00:13:54.969 "data_size": 65536 00:13:54.969 }, 00:13:54.969 { 00:13:54.969 "name": "BaseBdev3", 00:13:54.969 "uuid": "e4abdc23-3f2a-5151-ab4d-dfe6c77be488", 00:13:54.969 "is_configured": true, 00:13:54.969 "data_offset": 0, 00:13:54.969 "data_size": 65536 00:13:54.969 }, 00:13:54.969 { 00:13:54.969 "name": "BaseBdev4", 00:13:54.969 "uuid": "87056400-4417-52a6-b263-0e6134aabd32", 00:13:54.969 "is_configured": true, 00:13:54.969 "data_offset": 0, 00:13:54.969 "data_size": 65536 00:13:54.969 } 00:13:54.969 ] 00:13:54.969 }' 00:13:54.969 13:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.969 13:23:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.229 137.00 IOPS, 411.00 MiB/s [2024-11-17T13:23:44.453Z] 13:23:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:55.229 13:23:44 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:55.229 13:23:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:55.229 13:23:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:55.229 13:23:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:55.229 13:23:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.229 13:23:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.229 13:23:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.229 13:23:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.229 13:23:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.229 13:23:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:55.229 "name": "raid_bdev1", 00:13:55.229 "uuid": "0953f716-4edd-405d-a940-1816b1a1c233", 00:13:55.229 "strip_size_kb": 0, 00:13:55.229 "state": "online", 00:13:55.229 "raid_level": "raid1", 00:13:55.229 "superblock": false, 00:13:55.229 "num_base_bdevs": 4, 00:13:55.229 "num_base_bdevs_discovered": 3, 00:13:55.229 "num_base_bdevs_operational": 3, 00:13:55.229 "base_bdevs_list": [ 00:13:55.229 { 00:13:55.229 "name": null, 00:13:55.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.229 "is_configured": false, 00:13:55.229 "data_offset": 0, 00:13:55.229 "data_size": 65536 00:13:55.229 }, 00:13:55.229 { 00:13:55.229 "name": "BaseBdev2", 00:13:55.229 "uuid": "17115bae-7506-55a3-b085-98bf23c0457e", 00:13:55.229 "is_configured": true, 00:13:55.229 "data_offset": 0, 00:13:55.229 "data_size": 65536 00:13:55.229 }, 00:13:55.229 { 00:13:55.229 "name": "BaseBdev3", 00:13:55.229 "uuid": "e4abdc23-3f2a-5151-ab4d-dfe6c77be488", 
00:13:55.229 "is_configured": true, 00:13:55.229 "data_offset": 0, 00:13:55.229 "data_size": 65536 00:13:55.229 }, 00:13:55.229 { 00:13:55.229 "name": "BaseBdev4", 00:13:55.229 "uuid": "87056400-4417-52a6-b263-0e6134aabd32", 00:13:55.229 "is_configured": true, 00:13:55.229 "data_offset": 0, 00:13:55.229 "data_size": 65536 00:13:55.229 } 00:13:55.229 ] 00:13:55.229 }' 00:13:55.229 13:23:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:55.229 13:23:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:55.229 13:23:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:55.489 13:23:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:55.489 13:23:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:55.489 13:23:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.489 13:23:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.489 [2024-11-17 13:23:44.503477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:55.489 13:23:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.489 13:23:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:55.489 [2024-11-17 13:23:44.562482] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:55.489 [2024-11-17 13:23:44.564715] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:55.748 [2024-11-17 13:23:44.714276] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:55.748 [2024-11-17 13:23:44.858917] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:55.748 [2024-11-17 13:23:44.860164] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:56.008 [2024-11-17 13:23:45.220190] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:56.267 149.67 IOPS, 449.00 MiB/s [2024-11-17T13:23:45.491Z] [2024-11-17 13:23:45.334555] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:56.267 [2024-11-17 13:23:45.335774] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:56.527 13:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:56.527 13:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:56.527 13:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:56.527 13:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:56.527 13:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:56.527 13:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.527 13:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.527 13:23:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.527 13:23:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.527 13:23:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.527 13:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:56.527 "name": 
"raid_bdev1", 00:13:56.527 "uuid": "0953f716-4edd-405d-a940-1816b1a1c233", 00:13:56.527 "strip_size_kb": 0, 00:13:56.527 "state": "online", 00:13:56.527 "raid_level": "raid1", 00:13:56.527 "superblock": false, 00:13:56.527 "num_base_bdevs": 4, 00:13:56.527 "num_base_bdevs_discovered": 4, 00:13:56.527 "num_base_bdevs_operational": 4, 00:13:56.527 "process": { 00:13:56.527 "type": "rebuild", 00:13:56.527 "target": "spare", 00:13:56.527 "progress": { 00:13:56.527 "blocks": 10240, 00:13:56.527 "percent": 15 00:13:56.527 } 00:13:56.527 }, 00:13:56.527 "base_bdevs_list": [ 00:13:56.527 { 00:13:56.527 "name": "spare", 00:13:56.527 "uuid": "97fd2207-633d-58ef-96cc-2bc1a61f77f9", 00:13:56.527 "is_configured": true, 00:13:56.527 "data_offset": 0, 00:13:56.527 "data_size": 65536 00:13:56.527 }, 00:13:56.527 { 00:13:56.527 "name": "BaseBdev2", 00:13:56.527 "uuid": "17115bae-7506-55a3-b085-98bf23c0457e", 00:13:56.527 "is_configured": true, 00:13:56.527 "data_offset": 0, 00:13:56.527 "data_size": 65536 00:13:56.527 }, 00:13:56.527 { 00:13:56.527 "name": "BaseBdev3", 00:13:56.527 "uuid": "e4abdc23-3f2a-5151-ab4d-dfe6c77be488", 00:13:56.527 "is_configured": true, 00:13:56.527 "data_offset": 0, 00:13:56.527 "data_size": 65536 00:13:56.527 }, 00:13:56.527 { 00:13:56.527 "name": "BaseBdev4", 00:13:56.527 "uuid": "87056400-4417-52a6-b263-0e6134aabd32", 00:13:56.527 "is_configured": true, 00:13:56.527 "data_offset": 0, 00:13:56.527 "data_size": 65536 00:13:56.527 } 00:13:56.527 ] 00:13:56.527 }' 00:13:56.527 13:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:56.527 13:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:56.527 13:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:56.527 13:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:56.527 13:23:45 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:56.527 13:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:56.527 13:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:56.527 13:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:56.527 13:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:56.527 13:23:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.527 13:23:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.527 [2024-11-17 13:23:45.672774] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:56.527 [2024-11-17 13:23:45.721525] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:13:56.527 [2024-11-17 13:23:45.721563] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:13:56.527 13:23:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.527 13:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:56.527 13:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:56.527 13:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:56.527 13:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:56.527 13:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:56.527 13:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:56.527 13:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:13:56.527 13:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.527 13:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.527 13:23:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.527 13:23:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.787 13:23:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.787 13:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:56.787 "name": "raid_bdev1", 00:13:56.787 "uuid": "0953f716-4edd-405d-a940-1816b1a1c233", 00:13:56.787 "strip_size_kb": 0, 00:13:56.787 "state": "online", 00:13:56.787 "raid_level": "raid1", 00:13:56.787 "superblock": false, 00:13:56.787 "num_base_bdevs": 4, 00:13:56.787 "num_base_bdevs_discovered": 3, 00:13:56.787 "num_base_bdevs_operational": 3, 00:13:56.787 "process": { 00:13:56.787 "type": "rebuild", 00:13:56.787 "target": "spare", 00:13:56.787 "progress": { 00:13:56.787 "blocks": 14336, 00:13:56.787 "percent": 21 00:13:56.787 } 00:13:56.787 }, 00:13:56.787 "base_bdevs_list": [ 00:13:56.787 { 00:13:56.787 "name": "spare", 00:13:56.787 "uuid": "97fd2207-633d-58ef-96cc-2bc1a61f77f9", 00:13:56.787 "is_configured": true, 00:13:56.787 "data_offset": 0, 00:13:56.787 "data_size": 65536 00:13:56.787 }, 00:13:56.787 { 00:13:56.787 "name": null, 00:13:56.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.787 "is_configured": false, 00:13:56.787 "data_offset": 0, 00:13:56.787 "data_size": 65536 00:13:56.787 }, 00:13:56.787 { 00:13:56.787 "name": "BaseBdev3", 00:13:56.787 "uuid": "e4abdc23-3f2a-5151-ab4d-dfe6c77be488", 00:13:56.787 "is_configured": true, 00:13:56.787 "data_offset": 0, 00:13:56.787 "data_size": 65536 00:13:56.787 }, 00:13:56.787 { 00:13:56.787 "name": "BaseBdev4", 00:13:56.787 "uuid": 
"87056400-4417-52a6-b263-0e6134aabd32", 00:13:56.787 "is_configured": true, 00:13:56.787 "data_offset": 0, 00:13:56.787 "data_size": 65536 00:13:56.787 } 00:13:56.787 ] 00:13:56.787 }' 00:13:56.787 13:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:56.788 13:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:56.788 13:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:56.788 [2024-11-17 13:23:45.864224] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:56.788 [2024-11-17 13:23:45.864580] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:56.788 13:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:56.788 13:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=475 00:13:56.788 13:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:56.788 13:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:56.788 13:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:56.788 13:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:56.788 13:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:56.788 13:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:56.788 13:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.788 13:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.788 
13:23:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.788 13:23:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.788 13:23:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.788 13:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:56.788 "name": "raid_bdev1", 00:13:56.788 "uuid": "0953f716-4edd-405d-a940-1816b1a1c233", 00:13:56.788 "strip_size_kb": 0, 00:13:56.788 "state": "online", 00:13:56.788 "raid_level": "raid1", 00:13:56.788 "superblock": false, 00:13:56.788 "num_base_bdevs": 4, 00:13:56.788 "num_base_bdevs_discovered": 3, 00:13:56.788 "num_base_bdevs_operational": 3, 00:13:56.788 "process": { 00:13:56.788 "type": "rebuild", 00:13:56.788 "target": "spare", 00:13:56.788 "progress": { 00:13:56.788 "blocks": 16384, 00:13:56.788 "percent": 25 00:13:56.788 } 00:13:56.788 }, 00:13:56.788 "base_bdevs_list": [ 00:13:56.788 { 00:13:56.788 "name": "spare", 00:13:56.788 "uuid": "97fd2207-633d-58ef-96cc-2bc1a61f77f9", 00:13:56.788 "is_configured": true, 00:13:56.788 "data_offset": 0, 00:13:56.788 "data_size": 65536 00:13:56.788 }, 00:13:56.788 { 00:13:56.788 "name": null, 00:13:56.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.788 "is_configured": false, 00:13:56.788 "data_offset": 0, 00:13:56.788 "data_size": 65536 00:13:56.788 }, 00:13:56.788 { 00:13:56.788 "name": "BaseBdev3", 00:13:56.788 "uuid": "e4abdc23-3f2a-5151-ab4d-dfe6c77be488", 00:13:56.788 "is_configured": true, 00:13:56.788 "data_offset": 0, 00:13:56.788 "data_size": 65536 00:13:56.788 }, 00:13:56.788 { 00:13:56.788 "name": "BaseBdev4", 00:13:56.788 "uuid": "87056400-4417-52a6-b263-0e6134aabd32", 00:13:56.788 "is_configured": true, 00:13:56.788 "data_offset": 0, 00:13:56.788 "data_size": 65536 00:13:56.788 } 00:13:56.788 ] 00:13:56.788 }' 00:13:56.788 13:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:13:56.788 13:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:56.788 13:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:56.788 13:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:56.788 13:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:57.047 [2024-11-17 13:23:46.088278] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:57.047 [2024-11-17 13:23:46.196550] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:57.047 [2024-11-17 13:23:46.197310] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:57.309 131.25 IOPS, 393.75 MiB/s [2024-11-17T13:23:46.533Z] [2024-11-17 13:23:46.532864] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:13:57.569 [2024-11-17 13:23:46.534367] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:13:57.569 [2024-11-17 13:23:46.755588] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:57.569 [2024-11-17 13:23:46.756476] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:57.828 13:23:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:57.828 13:23:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:57.828 13:23:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:13:57.828 13:23:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:57.828 13:23:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:57.828 13:23:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:57.828 13:23:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.828 13:23:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.828 13:23:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.828 13:23:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.828 13:23:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.828 13:23:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:57.828 "name": "raid_bdev1", 00:13:57.828 "uuid": "0953f716-4edd-405d-a940-1816b1a1c233", 00:13:57.828 "strip_size_kb": 0, 00:13:57.828 "state": "online", 00:13:57.828 "raid_level": "raid1", 00:13:57.828 "superblock": false, 00:13:57.828 "num_base_bdevs": 4, 00:13:57.828 "num_base_bdevs_discovered": 3, 00:13:57.828 "num_base_bdevs_operational": 3, 00:13:57.828 "process": { 00:13:57.828 "type": "rebuild", 00:13:57.828 "target": "spare", 00:13:57.828 "progress": { 00:13:57.828 "blocks": 30720, 00:13:57.828 "percent": 46 00:13:57.828 } 00:13:57.828 }, 00:13:57.828 "base_bdevs_list": [ 00:13:57.828 { 00:13:57.829 "name": "spare", 00:13:57.829 "uuid": "97fd2207-633d-58ef-96cc-2bc1a61f77f9", 00:13:57.829 "is_configured": true, 00:13:57.829 "data_offset": 0, 00:13:57.829 "data_size": 65536 00:13:57.829 }, 00:13:57.829 { 00:13:57.829 "name": null, 00:13:57.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.829 "is_configured": false, 00:13:57.829 "data_offset": 0, 00:13:57.829 
"data_size": 65536 00:13:57.829 }, 00:13:57.829 { 00:13:57.829 "name": "BaseBdev3", 00:13:57.829 "uuid": "e4abdc23-3f2a-5151-ab4d-dfe6c77be488", 00:13:57.829 "is_configured": true, 00:13:57.829 "data_offset": 0, 00:13:57.829 "data_size": 65536 00:13:57.829 }, 00:13:57.829 { 00:13:57.829 "name": "BaseBdev4", 00:13:57.829 "uuid": "87056400-4417-52a6-b263-0e6134aabd32", 00:13:57.829 "is_configured": true, 00:13:57.829 "data_offset": 0, 00:13:57.829 "data_size": 65536 00:13:57.829 } 00:13:57.829 ] 00:13:57.829 }' 00:13:57.829 13:23:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:58.089 13:23:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:58.089 13:23:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:58.089 [2024-11-17 13:23:47.136855] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:13:58.089 13:23:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:58.089 13:23:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:58.658 111.40 IOPS, 334.20 MiB/s [2024-11-17T13:23:47.882Z] [2024-11-17 13:23:47.689656] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:13:58.918 [2024-11-17 13:23:48.025754] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:13:59.178 [2024-11-17 13:23:48.144248] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:13:59.178 13:23:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:59.178 13:23:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 
rebuild spare 00:13:59.178 13:23:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:59.178 13:23:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:59.178 13:23:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:59.178 13:23:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:59.178 13:23:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.178 13:23:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.178 13:23:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.178 13:23:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.178 13:23:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.178 13:23:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:59.178 "name": "raid_bdev1", 00:13:59.178 "uuid": "0953f716-4edd-405d-a940-1816b1a1c233", 00:13:59.178 "strip_size_kb": 0, 00:13:59.178 "state": "online", 00:13:59.178 "raid_level": "raid1", 00:13:59.178 "superblock": false, 00:13:59.178 "num_base_bdevs": 4, 00:13:59.178 "num_base_bdevs_discovered": 3, 00:13:59.178 "num_base_bdevs_operational": 3, 00:13:59.178 "process": { 00:13:59.178 "type": "rebuild", 00:13:59.178 "target": "spare", 00:13:59.178 "progress": { 00:13:59.178 "blocks": 47104, 00:13:59.178 "percent": 71 00:13:59.178 } 00:13:59.178 }, 00:13:59.178 "base_bdevs_list": [ 00:13:59.178 { 00:13:59.178 "name": "spare", 00:13:59.178 "uuid": "97fd2207-633d-58ef-96cc-2bc1a61f77f9", 00:13:59.178 "is_configured": true, 00:13:59.178 "data_offset": 0, 00:13:59.178 "data_size": 65536 00:13:59.178 }, 00:13:59.178 { 00:13:59.178 "name": null, 00:13:59.178 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:59.178 "is_configured": false, 00:13:59.178 "data_offset": 0, 00:13:59.178 "data_size": 65536 00:13:59.178 }, 00:13:59.178 { 00:13:59.178 "name": "BaseBdev3", 00:13:59.178 "uuid": "e4abdc23-3f2a-5151-ab4d-dfe6c77be488", 00:13:59.178 "is_configured": true, 00:13:59.178 "data_offset": 0, 00:13:59.178 "data_size": 65536 00:13:59.178 }, 00:13:59.178 { 00:13:59.178 "name": "BaseBdev4", 00:13:59.178 "uuid": "87056400-4417-52a6-b263-0e6134aabd32", 00:13:59.178 "is_configured": true, 00:13:59.178 "data_offset": 0, 00:13:59.178 "data_size": 65536 00:13:59.178 } 00:13:59.178 ] 00:13:59.178 }' 00:13:59.178 13:23:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:59.178 99.00 IOPS, 297.00 MiB/s [2024-11-17T13:23:48.402Z] 13:23:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:59.178 13:23:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:59.178 13:23:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:59.178 13:23:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:59.747 [2024-11-17 13:23:48.909250] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:14:00.266 90.43 IOPS, 271.29 MiB/s [2024-11-17T13:23:49.490Z] [2024-11-17 13:23:49.244093] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:00.266 13:23:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:00.266 13:23:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:00.266 13:23:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:00.266 13:23:49 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:00.266 13:23:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:00.266 13:23:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:00.266 13:23:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.266 13:23:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.266 13:23:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.266 13:23:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.266 13:23:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.266 13:23:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:00.266 "name": "raid_bdev1", 00:14:00.266 "uuid": "0953f716-4edd-405d-a940-1816b1a1c233", 00:14:00.266 "strip_size_kb": 0, 00:14:00.266 "state": "online", 00:14:00.266 "raid_level": "raid1", 00:14:00.266 "superblock": false, 00:14:00.266 "num_base_bdevs": 4, 00:14:00.266 "num_base_bdevs_discovered": 3, 00:14:00.266 "num_base_bdevs_operational": 3, 00:14:00.266 "process": { 00:14:00.266 "type": "rebuild", 00:14:00.266 "target": "spare", 00:14:00.266 "progress": { 00:14:00.266 "blocks": 65536, 00:14:00.266 "percent": 100 00:14:00.266 } 00:14:00.266 }, 00:14:00.266 "base_bdevs_list": [ 00:14:00.266 { 00:14:00.266 "name": "spare", 00:14:00.266 "uuid": "97fd2207-633d-58ef-96cc-2bc1a61f77f9", 00:14:00.266 "is_configured": true, 00:14:00.266 "data_offset": 0, 00:14:00.266 "data_size": 65536 00:14:00.266 }, 00:14:00.266 { 00:14:00.266 "name": null, 00:14:00.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.266 "is_configured": false, 00:14:00.266 "data_offset": 0, 00:14:00.266 "data_size": 65536 00:14:00.266 }, 00:14:00.266 { 00:14:00.266 "name": "BaseBdev3", 
00:14:00.266 "uuid": "e4abdc23-3f2a-5151-ab4d-dfe6c77be488", 00:14:00.266 "is_configured": true, 00:14:00.266 "data_offset": 0, 00:14:00.266 "data_size": 65536 00:14:00.266 }, 00:14:00.266 { 00:14:00.266 "name": "BaseBdev4", 00:14:00.266 "uuid": "87056400-4417-52a6-b263-0e6134aabd32", 00:14:00.266 "is_configured": true, 00:14:00.266 "data_offset": 0, 00:14:00.266 "data_size": 65536 00:14:00.266 } 00:14:00.266 ] 00:14:00.266 }' 00:14:00.266 13:23:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:00.266 [2024-11-17 13:23:49.343967] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:00.266 [2024-11-17 13:23:49.354192] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:00.266 13:23:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:00.266 13:23:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:00.266 13:23:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:00.266 13:23:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:01.466 84.00 IOPS, 252.00 MiB/s [2024-11-17T13:23:50.690Z] 13:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:01.466 13:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:01.466 13:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:01.466 13:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:01.466 13:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:01.466 13:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:01.466 13:23:50 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.466 13:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.466 13:23:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.466 13:23:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:01.466 13:23:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.466 13:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:01.466 "name": "raid_bdev1", 00:14:01.466 "uuid": "0953f716-4edd-405d-a940-1816b1a1c233", 00:14:01.466 "strip_size_kb": 0, 00:14:01.466 "state": "online", 00:14:01.466 "raid_level": "raid1", 00:14:01.466 "superblock": false, 00:14:01.466 "num_base_bdevs": 4, 00:14:01.466 "num_base_bdevs_discovered": 3, 00:14:01.466 "num_base_bdevs_operational": 3, 00:14:01.466 "base_bdevs_list": [ 00:14:01.466 { 00:14:01.466 "name": "spare", 00:14:01.466 "uuid": "97fd2207-633d-58ef-96cc-2bc1a61f77f9", 00:14:01.466 "is_configured": true, 00:14:01.466 "data_offset": 0, 00:14:01.466 "data_size": 65536 00:14:01.466 }, 00:14:01.466 { 00:14:01.466 "name": null, 00:14:01.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.466 "is_configured": false, 00:14:01.466 "data_offset": 0, 00:14:01.466 "data_size": 65536 00:14:01.466 }, 00:14:01.466 { 00:14:01.466 "name": "BaseBdev3", 00:14:01.466 "uuid": "e4abdc23-3f2a-5151-ab4d-dfe6c77be488", 00:14:01.466 "is_configured": true, 00:14:01.466 "data_offset": 0, 00:14:01.466 "data_size": 65536 00:14:01.466 }, 00:14:01.466 { 00:14:01.466 "name": "BaseBdev4", 00:14:01.466 "uuid": "87056400-4417-52a6-b263-0e6134aabd32", 00:14:01.466 "is_configured": true, 00:14:01.466 "data_offset": 0, 00:14:01.466 "data_size": 65536 00:14:01.466 } 00:14:01.466 ] 00:14:01.466 }' 00:14:01.466 13:23:50 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:01.466 13:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:01.466 13:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:01.466 13:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:01.466 13:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:14:01.466 13:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:01.466 13:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:01.466 13:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:01.466 13:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:01.466 13:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:01.466 13:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.466 13:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.466 13:23:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.466 13:23:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:01.466 13:23:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.466 13:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:01.466 "name": "raid_bdev1", 00:14:01.466 "uuid": "0953f716-4edd-405d-a940-1816b1a1c233", 00:14:01.466 "strip_size_kb": 0, 00:14:01.466 "state": "online", 00:14:01.466 "raid_level": "raid1", 00:14:01.466 "superblock": false, 00:14:01.466 "num_base_bdevs": 4, 00:14:01.466 "num_base_bdevs_discovered": 
3, 00:14:01.466 "num_base_bdevs_operational": 3, 00:14:01.466 "base_bdevs_list": [ 00:14:01.466 { 00:14:01.466 "name": "spare", 00:14:01.466 "uuid": "97fd2207-633d-58ef-96cc-2bc1a61f77f9", 00:14:01.466 "is_configured": true, 00:14:01.466 "data_offset": 0, 00:14:01.466 "data_size": 65536 00:14:01.466 }, 00:14:01.466 { 00:14:01.466 "name": null, 00:14:01.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.466 "is_configured": false, 00:14:01.466 "data_offset": 0, 00:14:01.466 "data_size": 65536 00:14:01.466 }, 00:14:01.466 { 00:14:01.466 "name": "BaseBdev3", 00:14:01.466 "uuid": "e4abdc23-3f2a-5151-ab4d-dfe6c77be488", 00:14:01.466 "is_configured": true, 00:14:01.466 "data_offset": 0, 00:14:01.466 "data_size": 65536 00:14:01.466 }, 00:14:01.466 { 00:14:01.466 "name": "BaseBdev4", 00:14:01.466 "uuid": "87056400-4417-52a6-b263-0e6134aabd32", 00:14:01.466 "is_configured": true, 00:14:01.466 "data_offset": 0, 00:14:01.466 "data_size": 65536 00:14:01.466 } 00:14:01.466 ] 00:14:01.466 }' 00:14:01.466 13:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:01.466 13:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:01.466 13:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:01.727 13:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:01.727 13:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:01.727 13:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:01.727 13:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:01.727 13:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:01.727 13:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # 
local strip_size=0 00:14:01.727 13:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:01.727 13:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.727 13:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.727 13:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:01.727 13:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:01.727 13:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.727 13:23:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.727 13:23:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:01.727 13:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.727 13:23:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.727 13:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:01.727 "name": "raid_bdev1", 00:14:01.727 "uuid": "0953f716-4edd-405d-a940-1816b1a1c233", 00:14:01.727 "strip_size_kb": 0, 00:14:01.727 "state": "online", 00:14:01.727 "raid_level": "raid1", 00:14:01.727 "superblock": false, 00:14:01.727 "num_base_bdevs": 4, 00:14:01.727 "num_base_bdevs_discovered": 3, 00:14:01.727 "num_base_bdevs_operational": 3, 00:14:01.727 "base_bdevs_list": [ 00:14:01.727 { 00:14:01.727 "name": "spare", 00:14:01.727 "uuid": "97fd2207-633d-58ef-96cc-2bc1a61f77f9", 00:14:01.727 "is_configured": true, 00:14:01.727 "data_offset": 0, 00:14:01.727 "data_size": 65536 00:14:01.727 }, 00:14:01.727 { 00:14:01.727 "name": null, 00:14:01.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.727 "is_configured": false, 00:14:01.727 "data_offset": 0, 00:14:01.727 
"data_size": 65536 00:14:01.727 }, 00:14:01.727 { 00:14:01.727 "name": "BaseBdev3", 00:14:01.727 "uuid": "e4abdc23-3f2a-5151-ab4d-dfe6c77be488", 00:14:01.727 "is_configured": true, 00:14:01.727 "data_offset": 0, 00:14:01.727 "data_size": 65536 00:14:01.727 }, 00:14:01.727 { 00:14:01.727 "name": "BaseBdev4", 00:14:01.727 "uuid": "87056400-4417-52a6-b263-0e6134aabd32", 00:14:01.727 "is_configured": true, 00:14:01.727 "data_offset": 0, 00:14:01.727 "data_size": 65536 00:14:01.727 } 00:14:01.727 ] 00:14:01.727 }' 00:14:01.727 13:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:01.727 13:23:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:01.987 13:23:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:01.987 13:23:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.987 13:23:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:01.987 [2024-11-17 13:23:51.111463] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:01.987 [2024-11-17 13:23:51.111584] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:02.247 00:14:02.247 Latency(us) 00:14:02.247 [2024-11-17T13:23:51.471Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:02.247 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:02.247 raid_bdev1 : 9.01 78.69 236.07 0.00 0.00 19560.66 361.31 119968.08 00:14:02.247 [2024-11-17T13:23:51.471Z] =================================================================================================================== 00:14:02.247 [2024-11-17T13:23:51.471Z] Total : 78.69 236.07 0.00 0.00 19560.66 361.31 119968.08 00:14:02.247 [2024-11-17 13:23:51.230952] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:02.247 
[2024-11-17 13:23:51.231032] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:02.247 [2024-11-17 13:23:51.231189] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:02.247 [2024-11-17 13:23:51.231280] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:02.247 { 00:14:02.247 "results": [ 00:14:02.247 { 00:14:02.247 "job": "raid_bdev1", 00:14:02.247 "core_mask": "0x1", 00:14:02.247 "workload": "randrw", 00:14:02.247 "percentage": 50, 00:14:02.247 "status": "finished", 00:14:02.247 "queue_depth": 2, 00:14:02.247 "io_size": 3145728, 00:14:02.247 "runtime": 9.010046, 00:14:02.247 "iops": 78.68994231549983, 00:14:02.247 "mibps": 236.06982694649952, 00:14:02.247 "io_failed": 0, 00:14:02.247 "io_timeout": 0, 00:14:02.247 "avg_latency_us": 19560.658512820195, 00:14:02.247 "min_latency_us": 361.3065502183406, 00:14:02.247 "max_latency_us": 119968.08384279476 00:14:02.247 } 00:14:02.247 ], 00:14:02.247 "core_count": 1 00:14:02.247 } 00:14:02.247 13:23:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.247 13:23:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.247 13:23:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.247 13:23:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.247 13:23:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:02.247 13:23:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.247 13:23:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:02.247 13:23:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:02.247 13:23:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # 
'[' true = true ']' 00:14:02.247 13:23:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:02.247 13:23:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:02.247 13:23:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:02.247 13:23:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:02.247 13:23:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:02.247 13:23:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:02.247 13:23:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:02.247 13:23:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:02.247 13:23:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:02.247 13:23:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:02.508 /dev/nbd0 00:14:02.508 13:23:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:02.508 13:23:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:02.508 13:23:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:02.508 13:23:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:02.508 13:23:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:02.508 13:23:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:02.508 13:23:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:02.508 13:23:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # 
break 00:14:02.508 13:23:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:02.508 13:23:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:02.508 13:23:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:02.508 1+0 records in 00:14:02.508 1+0 records out 00:14:02.508 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000480151 s, 8.5 MB/s 00:14:02.508 13:23:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:02.508 13:23:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:02.508 13:23:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:02.508 13:23:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:02.508 13:23:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:02.508 13:23:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:02.508 13:23:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:02.508 13:23:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:02.508 13:23:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:14:02.508 13:23:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:14:02.508 13:23:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:02.508 13:23:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:14:02.508 13:23:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:14:02.508 
13:23:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:02.508 13:23:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:14:02.508 13:23:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:02.508 13:23:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:02.508 13:23:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:02.508 13:23:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:02.508 13:23:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:02.508 13:23:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:02.508 13:23:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:14:02.768 /dev/nbd1 00:14:02.768 13:23:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:02.768 13:23:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:02.768 13:23:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:02.768 13:23:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:02.768 13:23:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:02.768 13:23:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:02.768 13:23:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:02.768 13:23:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:02.768 13:23:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:02.768 13:23:51 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:02.768 13:23:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:02.768 1+0 records in 00:14:02.768 1+0 records out 00:14:02.768 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000477365 s, 8.6 MB/s 00:14:02.768 13:23:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:02.768 13:23:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:02.768 13:23:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:02.768 13:23:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:02.768 13:23:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:02.768 13:23:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:02.768 13:23:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:02.768 13:23:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:02.768 13:23:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:02.768 13:23:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:02.768 13:23:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:02.768 13:23:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:02.768 13:23:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:02.768 13:23:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:02.768 13:23:51 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:03.030 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:03.030 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:03.030 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:03.030 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:03.030 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:03.030 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:03.030 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:03.030 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:03.030 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:03.030 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:14:03.030 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:14:03.030 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:03.030 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:14:03.030 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:03.030 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:03.030 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:03.030 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:03.030 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:03.030 
13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:03.030 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:14:03.290 /dev/nbd1 00:14:03.290 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:03.290 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:03.290 13:23:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:03.290 13:23:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:03.290 13:23:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:03.290 13:23:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:03.290 13:23:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:03.290 13:23:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:03.290 13:23:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:03.290 13:23:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:03.290 13:23:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:03.290 1+0 records in 00:14:03.290 1+0 records out 00:14:03.290 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000301565 s, 13.6 MB/s 00:14:03.290 13:23:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:03.290 13:23:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:03.290 13:23:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm 
-f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:03.290 13:23:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:03.290 13:23:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:03.290 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:03.290 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:03.290 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:03.290 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:03.290 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:03.290 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:03.290 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:03.290 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:03.290 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:03.290 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:03.550 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:03.550 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:03.550 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:03.550 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:03.550 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:03.550 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 
/proc/partitions 00:14:03.550 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:03.550 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:03.550 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:03.550 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:03.550 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:03.550 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:03.550 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:03.550 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:03.550 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:03.809 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:03.809 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:03.809 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:03.809 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:03.809 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:03.809 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:03.809 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:03.809 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:03.809 13:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:03.809 13:23:52 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@784 -- # killprocess 78635 00:14:03.809 13:23:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 78635 ']' 00:14:03.809 13:23:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 78635 00:14:03.809 13:23:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:14:03.809 13:23:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:03.809 13:23:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78635 00:14:03.809 13:23:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:03.809 13:23:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:03.809 13:23:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78635' 00:14:03.809 killing process with pid 78635 00:14:03.809 Received shutdown signal, test time was about 10.791066 seconds 00:14:03.809 00:14:03.809 Latency(us) 00:14:03.809 [2024-11-17T13:23:53.033Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:03.809 [2024-11-17T13:23:53.033Z] =================================================================================================================== 00:14:03.809 [2024-11-17T13:23:53.033Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:03.809 13:23:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 78635 00:14:03.809 [2024-11-17 13:23:52.988761] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:03.809 13:23:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 78635 00:14:04.379 [2024-11-17 13:23:53.393426] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:05.319 13:23:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:05.319 00:14:05.319 real 
0m14.239s 00:14:05.319 user 0m17.614s 00:14:05.319 sys 0m1.881s 00:14:05.319 13:23:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:05.319 13:23:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.319 ************************************ 00:14:05.319 END TEST raid_rebuild_test_io 00:14:05.319 ************************************ 00:14:05.579 13:23:54 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:14:05.579 13:23:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:05.579 13:23:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:05.579 13:23:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:05.579 ************************************ 00:14:05.579 START TEST raid_rebuild_test_sb_io 00:14:05.579 ************************************ 00:14:05.579 13:23:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:14:05.579 13:23:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:05.579 13:23:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:05.579 13:23:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:05.579 13:23:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:05.579 13:23:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:05.579 13:23:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:05.579 13:23:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:05.579 13:23:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:05.579 13:23:54 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:05.579 13:23:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:05.579 13:23:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:05.579 13:23:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:05.579 13:23:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:05.579 13:23:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:05.579 13:23:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:05.579 13:23:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:05.579 13:23:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:05.579 13:23:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:05.579 13:23:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:05.579 13:23:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:05.579 13:23:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:05.579 13:23:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:05.579 13:23:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:05.579 13:23:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:05.579 13:23:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:05.579 13:23:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:05.579 13:23:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:05.579 13:23:54 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:05.579 13:23:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:05.579 13:23:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:05.579 13:23:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79065 00:14:05.579 13:23:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79065 00:14:05.579 13:23:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:05.579 13:23:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 79065 ']' 00:14:05.579 13:23:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:05.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:05.579 13:23:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:05.579 13:23:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:05.579 13:23:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:05.579 13:23:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.579 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:05.579 Zero copy mechanism will not be used. 00:14:05.579 [2024-11-17 13:23:54.689722] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:14:05.579 [2024-11-17 13:23:54.689820] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79065 ] 00:14:05.840 [2024-11-17 13:23:54.863491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.840 [2024-11-17 13:23:54.975252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.100 [2024-11-17 13:23:55.177307] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:06.100 [2024-11-17 13:23:55.177344] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:06.360 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:06.360 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:14:06.360 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:06.360 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:06.360 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.360 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.360 BaseBdev1_malloc 00:14:06.360 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.360 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:06.360 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.360 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.360 [2024-11-17 13:23:55.558894] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:06.360 [2024-11-17 13:23:55.558966] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:06.360 [2024-11-17 13:23:55.558991] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:06.360 [2024-11-17 13:23:55.559002] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:06.360 [2024-11-17 13:23:55.561094] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:06.360 [2024-11-17 13:23:55.561132] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:06.360 BaseBdev1 00:14:06.360 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.360 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:06.360 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:06.360 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.360 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.621 BaseBdev2_malloc 00:14:06.621 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.621 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:06.621 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.621 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.621 [2024-11-17 13:23:55.616249] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:06.621 [2024-11-17 13:23:55.616304] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:14:06.621 [2024-11-17 13:23:55.616338] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:06.621 [2024-11-17 13:23:55.616351] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:06.621 [2024-11-17 13:23:55.618401] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:06.621 [2024-11-17 13:23:55.618449] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:06.621 BaseBdev2 00:14:06.621 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.621 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:06.621 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:06.621 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.621 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.621 BaseBdev3_malloc 00:14:06.621 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.621 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:06.621 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.621 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.621 [2024-11-17 13:23:55.682473] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:06.621 [2024-11-17 13:23:55.682582] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:06.621 [2024-11-17 13:23:55.682611] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:06.621 
[2024-11-17 13:23:55.682623] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:06.621 [2024-11-17 13:23:55.684691] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:06.621 [2024-11-17 13:23:55.684729] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:06.621 BaseBdev3 00:14:06.621 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.621 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:06.621 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:06.621 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.621 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.621 BaseBdev4_malloc 00:14:06.621 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.621 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:06.621 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.621 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.621 [2024-11-17 13:23:55.738271] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:06.621 [2024-11-17 13:23:55.738320] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:06.621 [2024-11-17 13:23:55.738337] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:06.621 [2024-11-17 13:23:55.738348] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:06.621 [2024-11-17 13:23:55.740339] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:06.621 [2024-11-17 13:23:55.740379] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:06.621 BaseBdev4 00:14:06.621 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.621 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:06.621 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.621 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.621 spare_malloc 00:14:06.621 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.621 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:06.621 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.621 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.621 spare_delay 00:14:06.621 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.621 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:06.621 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.621 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.621 [2024-11-17 13:23:55.805723] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:06.621 [2024-11-17 13:23:55.805840] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:06.621 [2024-11-17 13:23:55.805863] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000a880 00:14:06.621 [2024-11-17 13:23:55.805874] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:06.621 [2024-11-17 13:23:55.807935] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:06.621 [2024-11-17 13:23:55.807973] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:06.621 spare 00:14:06.621 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.621 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:06.621 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.621 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.621 [2024-11-17 13:23:55.817764] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:06.621 [2024-11-17 13:23:55.819591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:06.621 [2024-11-17 13:23:55.819659] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:06.621 [2024-11-17 13:23:55.819709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:06.621 [2024-11-17 13:23:55.819879] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:06.621 [2024-11-17 13:23:55.819895] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:06.621 [2024-11-17 13:23:55.820129] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:06.621 [2024-11-17 13:23:55.820324] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:06.621 [2024-11-17 13:23:55.820335] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:06.621 [2024-11-17 13:23:55.820486] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:06.621 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.621 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:06.621 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:06.621 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:06.621 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:06.621 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:06.621 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:06.621 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.621 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.621 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.621 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.622 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.622 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.622 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.622 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.622 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.881 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.881 "name": "raid_bdev1", 00:14:06.881 "uuid": "56607d2c-fac0-4727-b82e-ceb6c238f74a", 00:14:06.881 "strip_size_kb": 0, 00:14:06.881 "state": "online", 00:14:06.881 "raid_level": "raid1", 00:14:06.881 "superblock": true, 00:14:06.881 "num_base_bdevs": 4, 00:14:06.881 "num_base_bdevs_discovered": 4, 00:14:06.881 "num_base_bdevs_operational": 4, 00:14:06.881 "base_bdevs_list": [ 00:14:06.881 { 00:14:06.881 "name": "BaseBdev1", 00:14:06.881 "uuid": "24c487d0-0e1e-5168-beed-22f763643b3f", 00:14:06.881 "is_configured": true, 00:14:06.881 "data_offset": 2048, 00:14:06.881 "data_size": 63488 00:14:06.881 }, 00:14:06.881 { 00:14:06.881 "name": "BaseBdev2", 00:14:06.881 "uuid": "0f902097-3953-5f12-9b4a-ad54a757c7bb", 00:14:06.881 "is_configured": true, 00:14:06.881 "data_offset": 2048, 00:14:06.881 "data_size": 63488 00:14:06.881 }, 00:14:06.881 { 00:14:06.881 "name": "BaseBdev3", 00:14:06.881 "uuid": "15aa295d-186a-5fdf-82c1-63cd2b22186d", 00:14:06.881 "is_configured": true, 00:14:06.881 "data_offset": 2048, 00:14:06.881 "data_size": 63488 00:14:06.881 }, 00:14:06.881 { 00:14:06.881 "name": "BaseBdev4", 00:14:06.881 "uuid": "5e70a0d7-dc23-57c0-bb42-031bd391bc6d", 00:14:06.881 "is_configured": true, 00:14:06.881 "data_offset": 2048, 00:14:06.881 "data_size": 63488 00:14:06.881 } 00:14:06.881 ] 00:14:06.881 }' 00:14:06.881 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.881 13:23:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.142 13:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:07.142 13:23:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.142 13:23:56 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:14:07.142 13:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:07.142 [2024-11-17 13:23:56.253302] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:07.142 13:23:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.142 13:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:07.142 13:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.142 13:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:07.142 13:23:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.142 13:23:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.142 13:23:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.142 13:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:07.142 13:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:07.142 13:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:07.142 13:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:07.142 13:23:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.142 13:23:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.142 [2024-11-17 13:23:56.352786] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:07.142 13:23:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.142 13:23:56 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:07.142 13:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:07.142 13:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:07.142 13:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:07.142 13:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:07.142 13:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:07.142 13:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:07.142 13:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:07.142 13:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:07.142 13:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:07.142 13:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.142 13:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.142 13:23:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.403 13:23:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.403 13:23:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.403 13:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:07.403 "name": "raid_bdev1", 00:14:07.403 "uuid": "56607d2c-fac0-4727-b82e-ceb6c238f74a", 00:14:07.403 "strip_size_kb": 0, 00:14:07.403 "state": "online", 00:14:07.403 "raid_level": "raid1", 00:14:07.403 
"superblock": true, 00:14:07.403 "num_base_bdevs": 4, 00:14:07.403 "num_base_bdevs_discovered": 3, 00:14:07.403 "num_base_bdevs_operational": 3, 00:14:07.403 "base_bdevs_list": [ 00:14:07.403 { 00:14:07.403 "name": null, 00:14:07.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.403 "is_configured": false, 00:14:07.403 "data_offset": 0, 00:14:07.403 "data_size": 63488 00:14:07.403 }, 00:14:07.403 { 00:14:07.403 "name": "BaseBdev2", 00:14:07.403 "uuid": "0f902097-3953-5f12-9b4a-ad54a757c7bb", 00:14:07.403 "is_configured": true, 00:14:07.403 "data_offset": 2048, 00:14:07.403 "data_size": 63488 00:14:07.403 }, 00:14:07.403 { 00:14:07.403 "name": "BaseBdev3", 00:14:07.403 "uuid": "15aa295d-186a-5fdf-82c1-63cd2b22186d", 00:14:07.403 "is_configured": true, 00:14:07.403 "data_offset": 2048, 00:14:07.403 "data_size": 63488 00:14:07.403 }, 00:14:07.403 { 00:14:07.403 "name": "BaseBdev4", 00:14:07.403 "uuid": "5e70a0d7-dc23-57c0-bb42-031bd391bc6d", 00:14:07.403 "is_configured": true, 00:14:07.403 "data_offset": 2048, 00:14:07.403 "data_size": 63488 00:14:07.403 } 00:14:07.403 ] 00:14:07.403 }' 00:14:07.403 13:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:07.403 13:23:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.403 [2024-11-17 13:23:56.448590] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:07.403 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:07.403 Zero copy mechanism will not be used. 00:14:07.403 Running I/O for 60 seconds... 
00:14:07.663 13:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:07.663 13:23:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.663 13:23:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.663 [2024-11-17 13:23:56.801099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:07.663 13:23:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.663 13:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:07.663 [2024-11-17 13:23:56.842701] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:07.663 [2024-11-17 13:23:56.844608] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:07.924 [2024-11-17 13:23:56.960486] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:07.924 [2024-11-17 13:23:56.962080] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:08.184 [2024-11-17 13:23:57.179816] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:08.184 [2024-11-17 13:23:57.180284] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:08.444 [2024-11-17 13:23:57.441158] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:08.444 [2024-11-17 13:23:57.442685] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:08.703 196.00 IOPS, 588.00 MiB/s [2024-11-17T13:23:57.927Z] [2024-11-17 13:23:57.697788] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:08.703 13:23:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:08.703 13:23:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:08.703 13:23:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:08.703 13:23:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:08.703 13:23:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:08.703 13:23:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.703 13:23:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.703 13:23:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.703 13:23:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.703 13:23:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.703 13:23:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:08.703 "name": "raid_bdev1", 00:14:08.703 "uuid": "56607d2c-fac0-4727-b82e-ceb6c238f74a", 00:14:08.703 "strip_size_kb": 0, 00:14:08.703 "state": "online", 00:14:08.703 "raid_level": "raid1", 00:14:08.703 "superblock": true, 00:14:08.703 "num_base_bdevs": 4, 00:14:08.703 "num_base_bdevs_discovered": 4, 00:14:08.703 "num_base_bdevs_operational": 4, 00:14:08.703 "process": { 00:14:08.703 "type": "rebuild", 00:14:08.703 "target": "spare", 00:14:08.703 "progress": { 00:14:08.703 "blocks": 10240, 00:14:08.703 "percent": 16 00:14:08.703 } 00:14:08.703 }, 00:14:08.703 "base_bdevs_list": [ 00:14:08.703 { 00:14:08.703 "name": "spare", 
00:14:08.703 "uuid": "4990a7af-a5d2-5929-80c8-0e44ef6c4e58", 00:14:08.703 "is_configured": true, 00:14:08.703 "data_offset": 2048, 00:14:08.703 "data_size": 63488 00:14:08.703 }, 00:14:08.703 { 00:14:08.703 "name": "BaseBdev2", 00:14:08.703 "uuid": "0f902097-3953-5f12-9b4a-ad54a757c7bb", 00:14:08.703 "is_configured": true, 00:14:08.703 "data_offset": 2048, 00:14:08.703 "data_size": 63488 00:14:08.703 }, 00:14:08.703 { 00:14:08.703 "name": "BaseBdev3", 00:14:08.703 "uuid": "15aa295d-186a-5fdf-82c1-63cd2b22186d", 00:14:08.703 "is_configured": true, 00:14:08.703 "data_offset": 2048, 00:14:08.703 "data_size": 63488 00:14:08.703 }, 00:14:08.703 { 00:14:08.703 "name": "BaseBdev4", 00:14:08.703 "uuid": "5e70a0d7-dc23-57c0-bb42-031bd391bc6d", 00:14:08.703 "is_configured": true, 00:14:08.703 "data_offset": 2048, 00:14:08.703 "data_size": 63488 00:14:08.703 } 00:14:08.703 ] 00:14:08.703 }' 00:14:08.703 13:23:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:08.703 13:23:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:08.703 13:23:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:08.962 13:23:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:08.962 13:23:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:08.962 13:23:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.962 13:23:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.962 [2024-11-17 13:23:57.979664] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:08.962 [2024-11-17 13:23:58.049165] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:08.963 [2024-11-17 
13:23:58.050801] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:08.963 [2024-11-17 13:23:58.066547] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:08.963 [2024-11-17 13:23:58.066592] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:08.963 [2024-11-17 13:23:58.066609] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:08.963 [2024-11-17 13:23:58.089391] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:14:08.963 13:23:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.963 13:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:08.963 13:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:08.963 13:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:08.963 13:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:08.963 13:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:08.963 13:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:08.963 13:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.963 13:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.963 13:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.963 13:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.963 13:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:14:08.963 13:23:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.963 13:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.963 13:23:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.963 13:23:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.963 13:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.963 "name": "raid_bdev1", 00:14:08.963 "uuid": "56607d2c-fac0-4727-b82e-ceb6c238f74a", 00:14:08.963 "strip_size_kb": 0, 00:14:08.963 "state": "online", 00:14:08.963 "raid_level": "raid1", 00:14:08.963 "superblock": true, 00:14:08.963 "num_base_bdevs": 4, 00:14:08.963 "num_base_bdevs_discovered": 3, 00:14:08.963 "num_base_bdevs_operational": 3, 00:14:08.963 "base_bdevs_list": [ 00:14:08.963 { 00:14:08.963 "name": null, 00:14:08.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.963 "is_configured": false, 00:14:08.963 "data_offset": 0, 00:14:08.963 "data_size": 63488 00:14:08.963 }, 00:14:08.963 { 00:14:08.963 "name": "BaseBdev2", 00:14:08.963 "uuid": "0f902097-3953-5f12-9b4a-ad54a757c7bb", 00:14:08.963 "is_configured": true, 00:14:08.963 "data_offset": 2048, 00:14:08.963 "data_size": 63488 00:14:08.963 }, 00:14:08.963 { 00:14:08.963 "name": "BaseBdev3", 00:14:08.963 "uuid": "15aa295d-186a-5fdf-82c1-63cd2b22186d", 00:14:08.963 "is_configured": true, 00:14:08.963 "data_offset": 2048, 00:14:08.963 "data_size": 63488 00:14:08.963 }, 00:14:08.963 { 00:14:08.963 "name": "BaseBdev4", 00:14:08.963 "uuid": "5e70a0d7-dc23-57c0-bb42-031bd391bc6d", 00:14:08.963 "is_configured": true, 00:14:08.963 "data_offset": 2048, 00:14:08.963 "data_size": 63488 00:14:08.963 } 00:14:08.963 ] 00:14:08.963 }' 00:14:08.963 13:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.963 13:23:58 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.533 172.50 IOPS, 517.50 MiB/s [2024-11-17T13:23:58.757Z] 13:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:09.533 13:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:09.533 13:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:09.533 13:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:09.533 13:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:09.533 13:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.533 13:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.533 13:23:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.533 13:23:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.533 13:23:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.533 13:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:09.533 "name": "raid_bdev1", 00:14:09.533 "uuid": "56607d2c-fac0-4727-b82e-ceb6c238f74a", 00:14:09.533 "strip_size_kb": 0, 00:14:09.533 "state": "online", 00:14:09.533 "raid_level": "raid1", 00:14:09.533 "superblock": true, 00:14:09.533 "num_base_bdevs": 4, 00:14:09.533 "num_base_bdevs_discovered": 3, 00:14:09.533 "num_base_bdevs_operational": 3, 00:14:09.533 "base_bdevs_list": [ 00:14:09.533 { 00:14:09.533 "name": null, 00:14:09.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.533 "is_configured": false, 00:14:09.533 "data_offset": 0, 00:14:09.533 "data_size": 63488 00:14:09.533 }, 00:14:09.533 { 
00:14:09.533 "name": "BaseBdev2", 00:14:09.533 "uuid": "0f902097-3953-5f12-9b4a-ad54a757c7bb", 00:14:09.533 "is_configured": true, 00:14:09.533 "data_offset": 2048, 00:14:09.533 "data_size": 63488 00:14:09.533 }, 00:14:09.533 { 00:14:09.533 "name": "BaseBdev3", 00:14:09.533 "uuid": "15aa295d-186a-5fdf-82c1-63cd2b22186d", 00:14:09.533 "is_configured": true, 00:14:09.533 "data_offset": 2048, 00:14:09.533 "data_size": 63488 00:14:09.533 }, 00:14:09.533 { 00:14:09.533 "name": "BaseBdev4", 00:14:09.533 "uuid": "5e70a0d7-dc23-57c0-bb42-031bd391bc6d", 00:14:09.533 "is_configured": true, 00:14:09.533 "data_offset": 2048, 00:14:09.533 "data_size": 63488 00:14:09.533 } 00:14:09.533 ] 00:14:09.533 }' 00:14:09.533 13:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:09.533 13:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:09.533 13:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:09.533 13:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:09.533 13:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:09.533 13:23:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.533 13:23:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.533 [2024-11-17 13:23:58.703208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:09.533 13:23:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.533 13:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:09.793 [2024-11-17 13:23:58.758306] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:09.793 [2024-11-17 13:23:58.760485] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:09.793 [2024-11-17 13:23:58.882482] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:10.053 [2024-11-17 13:23:59.098971] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:10.053 [2024-11-17 13:23:59.099723] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:10.312 164.67 IOPS, 494.00 MiB/s [2024-11-17T13:23:59.536Z] [2024-11-17 13:23:59.454631] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:10.572 13:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:10.572 13:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:10.572 13:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:10.572 13:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:10.572 13:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:10.572 13:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.572 13:23:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.572 13:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.572 13:23:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.572 13:23:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.572 13:23:59 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:10.572 "name": "raid_bdev1", 00:14:10.572 "uuid": "56607d2c-fac0-4727-b82e-ceb6c238f74a", 00:14:10.572 "strip_size_kb": 0, 00:14:10.572 "state": "online", 00:14:10.572 "raid_level": "raid1", 00:14:10.572 "superblock": true, 00:14:10.572 "num_base_bdevs": 4, 00:14:10.572 "num_base_bdevs_discovered": 4, 00:14:10.572 "num_base_bdevs_operational": 4, 00:14:10.572 "process": { 00:14:10.572 "type": "rebuild", 00:14:10.572 "target": "spare", 00:14:10.572 "progress": { 00:14:10.572 "blocks": 10240, 00:14:10.572 "percent": 16 00:14:10.572 } 00:14:10.572 }, 00:14:10.572 "base_bdevs_list": [ 00:14:10.572 { 00:14:10.572 "name": "spare", 00:14:10.572 "uuid": "4990a7af-a5d2-5929-80c8-0e44ef6c4e58", 00:14:10.572 "is_configured": true, 00:14:10.572 "data_offset": 2048, 00:14:10.572 "data_size": 63488 00:14:10.572 }, 00:14:10.572 { 00:14:10.572 "name": "BaseBdev2", 00:14:10.572 "uuid": "0f902097-3953-5f12-9b4a-ad54a757c7bb", 00:14:10.572 "is_configured": true, 00:14:10.572 "data_offset": 2048, 00:14:10.572 "data_size": 63488 00:14:10.572 }, 00:14:10.572 { 00:14:10.572 "name": "BaseBdev3", 00:14:10.572 "uuid": "15aa295d-186a-5fdf-82c1-63cd2b22186d", 00:14:10.572 "is_configured": true, 00:14:10.572 "data_offset": 2048, 00:14:10.572 "data_size": 63488 00:14:10.572 }, 00:14:10.572 { 00:14:10.572 "name": "BaseBdev4", 00:14:10.572 "uuid": "5e70a0d7-dc23-57c0-bb42-031bd391bc6d", 00:14:10.572 "is_configured": true, 00:14:10.572 "data_offset": 2048, 00:14:10.572 "data_size": 63488 00:14:10.572 } 00:14:10.572 ] 00:14:10.572 }' 00:14:10.572 13:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:10.832 13:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:10.832 13:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:10.832 13:23:59 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:10.832 13:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:10.832 13:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:10.832 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:10.832 13:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:10.832 13:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:10.832 13:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:10.832 13:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:10.832 13:23:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.832 13:23:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.832 [2024-11-17 13:23:59.891206] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:10.832 [2024-11-17 13:23:59.924552] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:11.093 [2024-11-17 13:24:00.132192] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:14:11.093 [2024-11-17 13:24:00.132315] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:14:11.093 13:24:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.093 13:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:11.093 13:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:11.093 13:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:11.093 13:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:11.093 13:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:11.093 13:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:11.093 13:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:11.093 13:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.093 13:24:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.093 13:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.093 13:24:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.093 13:24:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.093 13:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:11.093 "name": "raid_bdev1", 00:14:11.093 "uuid": "56607d2c-fac0-4727-b82e-ceb6c238f74a", 00:14:11.093 "strip_size_kb": 0, 00:14:11.093 "state": "online", 00:14:11.093 "raid_level": "raid1", 00:14:11.093 "superblock": true, 00:14:11.093 "num_base_bdevs": 4, 00:14:11.093 "num_base_bdevs_discovered": 3, 00:14:11.093 "num_base_bdevs_operational": 3, 00:14:11.093 "process": { 00:14:11.093 "type": "rebuild", 00:14:11.093 "target": "spare", 00:14:11.093 "progress": { 00:14:11.093 "blocks": 14336, 00:14:11.093 "percent": 22 00:14:11.093 } 00:14:11.093 }, 00:14:11.093 "base_bdevs_list": [ 00:14:11.093 { 00:14:11.093 "name": "spare", 00:14:11.093 "uuid": "4990a7af-a5d2-5929-80c8-0e44ef6c4e58", 00:14:11.093 "is_configured": true, 00:14:11.093 "data_offset": 2048, 00:14:11.093 "data_size": 63488 00:14:11.093 }, 00:14:11.093 { 
00:14:11.093 "name": null, 00:14:11.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.093 "is_configured": false, 00:14:11.093 "data_offset": 0, 00:14:11.093 "data_size": 63488 00:14:11.093 }, 00:14:11.093 { 00:14:11.093 "name": "BaseBdev3", 00:14:11.093 "uuid": "15aa295d-186a-5fdf-82c1-63cd2b22186d", 00:14:11.093 "is_configured": true, 00:14:11.093 "data_offset": 2048, 00:14:11.093 "data_size": 63488 00:14:11.093 }, 00:14:11.093 { 00:14:11.093 "name": "BaseBdev4", 00:14:11.093 "uuid": "5e70a0d7-dc23-57c0-bb42-031bd391bc6d", 00:14:11.093 "is_configured": true, 00:14:11.093 "data_offset": 2048, 00:14:11.093 "data_size": 63488 00:14:11.093 } 00:14:11.093 ] 00:14:11.093 }' 00:14:11.093 13:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:11.093 13:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:11.093 13:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:11.093 13:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:11.093 13:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=490 00:14:11.093 13:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:11.093 13:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:11.093 13:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:11.093 13:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:11.093 13:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:11.093 13:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:11.093 13:24:00 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.093 13:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.093 13:24:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.093 13:24:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.093 13:24:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.354 13:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:11.354 "name": "raid_bdev1", 00:14:11.354 "uuid": "56607d2c-fac0-4727-b82e-ceb6c238f74a", 00:14:11.354 "strip_size_kb": 0, 00:14:11.354 "state": "online", 00:14:11.354 "raid_level": "raid1", 00:14:11.354 "superblock": true, 00:14:11.354 "num_base_bdevs": 4, 00:14:11.354 "num_base_bdevs_discovered": 3, 00:14:11.354 "num_base_bdevs_operational": 3, 00:14:11.354 "process": { 00:14:11.354 "type": "rebuild", 00:14:11.354 "target": "spare", 00:14:11.354 "progress": { 00:14:11.354 "blocks": 16384, 00:14:11.354 "percent": 25 00:14:11.354 } 00:14:11.354 }, 00:14:11.354 "base_bdevs_list": [ 00:14:11.354 { 00:14:11.354 "name": "spare", 00:14:11.354 "uuid": "4990a7af-a5d2-5929-80c8-0e44ef6c4e58", 00:14:11.354 "is_configured": true, 00:14:11.354 "data_offset": 2048, 00:14:11.354 "data_size": 63488 00:14:11.354 }, 00:14:11.354 { 00:14:11.354 "name": null, 00:14:11.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.354 "is_configured": false, 00:14:11.354 "data_offset": 0, 00:14:11.354 "data_size": 63488 00:14:11.354 }, 00:14:11.354 { 00:14:11.354 "name": "BaseBdev3", 00:14:11.354 "uuid": "15aa295d-186a-5fdf-82c1-63cd2b22186d", 00:14:11.354 "is_configured": true, 00:14:11.354 "data_offset": 2048, 00:14:11.354 "data_size": 63488 00:14:11.354 }, 00:14:11.354 { 00:14:11.354 "name": "BaseBdev4", 00:14:11.354 "uuid": 
"5e70a0d7-dc23-57c0-bb42-031bd391bc6d", 00:14:11.354 "is_configured": true, 00:14:11.354 "data_offset": 2048, 00:14:11.354 "data_size": 63488 00:14:11.354 } 00:14:11.354 ] 00:14:11.354 }' 00:14:11.354 13:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:11.354 13:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:11.354 13:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:11.354 13:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:11.354 13:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:11.354 149.25 IOPS, 447.75 MiB/s [2024-11-17T13:24:00.578Z] [2024-11-17 13:24:00.474277] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:11.615 [2024-11-17 13:24:00.815033] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:14:11.875 [2024-11-17 13:24:01.037167] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:12.135 [2024-11-17 13:24:01.267199] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:14:12.395 [2024-11-17 13:24:01.372626] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:14:12.395 [2024-11-17 13:24:01.372957] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:14:12.395 13:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:12.395 13:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:12.395 13:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:12.395 13:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:12.395 13:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:12.395 13:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:12.395 13:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.395 13:24:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.395 13:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.395 13:24:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.395 13:24:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.395 129.00 IOPS, 387.00 MiB/s [2024-11-17T13:24:01.619Z] 13:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:12.395 "name": "raid_bdev1", 00:14:12.395 "uuid": "56607d2c-fac0-4727-b82e-ceb6c238f74a", 00:14:12.395 "strip_size_kb": 0, 00:14:12.395 "state": "online", 00:14:12.395 "raid_level": "raid1", 00:14:12.395 "superblock": true, 00:14:12.395 "num_base_bdevs": 4, 00:14:12.395 "num_base_bdevs_discovered": 3, 00:14:12.395 "num_base_bdevs_operational": 3, 00:14:12.395 "process": { 00:14:12.395 "type": "rebuild", 00:14:12.395 "target": "spare", 00:14:12.395 "progress": { 00:14:12.395 "blocks": 34816, 00:14:12.395 "percent": 54 00:14:12.395 } 00:14:12.395 }, 00:14:12.395 "base_bdevs_list": [ 00:14:12.395 { 00:14:12.395 "name": "spare", 00:14:12.395 "uuid": "4990a7af-a5d2-5929-80c8-0e44ef6c4e58", 00:14:12.395 "is_configured": true, 00:14:12.395 "data_offset": 2048, 00:14:12.395 
"data_size": 63488 00:14:12.395 }, 00:14:12.395 { 00:14:12.395 "name": null, 00:14:12.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.395 "is_configured": false, 00:14:12.395 "data_offset": 0, 00:14:12.395 "data_size": 63488 00:14:12.395 }, 00:14:12.395 { 00:14:12.395 "name": "BaseBdev3", 00:14:12.395 "uuid": "15aa295d-186a-5fdf-82c1-63cd2b22186d", 00:14:12.395 "is_configured": true, 00:14:12.395 "data_offset": 2048, 00:14:12.395 "data_size": 63488 00:14:12.395 }, 00:14:12.395 { 00:14:12.395 "name": "BaseBdev4", 00:14:12.395 "uuid": "5e70a0d7-dc23-57c0-bb42-031bd391bc6d", 00:14:12.395 "is_configured": true, 00:14:12.395 "data_offset": 2048, 00:14:12.395 "data_size": 63488 00:14:12.395 } 00:14:12.395 ] 00:14:12.395 }' 00:14:12.395 13:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:12.395 13:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:12.395 13:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:12.395 13:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:12.395 13:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:12.964 [2024-11-17 13:24:02.050931] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:14:13.224 [2024-11-17 13:24:02.389603] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:14:13.484 116.17 IOPS, 348.50 MiB/s [2024-11-17T13:24:02.708Z] 13:24:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:13.484 13:24:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:13.484 13:24:02 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:13.484 13:24:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:13.484 13:24:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:13.484 13:24:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:13.484 13:24:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.484 13:24:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.484 13:24:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.484 13:24:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.484 13:24:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.484 13:24:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:13.484 "name": "raid_bdev1", 00:14:13.484 "uuid": "56607d2c-fac0-4727-b82e-ceb6c238f74a", 00:14:13.484 "strip_size_kb": 0, 00:14:13.484 "state": "online", 00:14:13.484 "raid_level": "raid1", 00:14:13.484 "superblock": true, 00:14:13.484 "num_base_bdevs": 4, 00:14:13.484 "num_base_bdevs_discovered": 3, 00:14:13.484 "num_base_bdevs_operational": 3, 00:14:13.484 "process": { 00:14:13.484 "type": "rebuild", 00:14:13.484 "target": "spare", 00:14:13.484 "progress": { 00:14:13.484 "blocks": 51200, 00:14:13.484 "percent": 80 00:14:13.484 } 00:14:13.484 }, 00:14:13.484 "base_bdevs_list": [ 00:14:13.484 { 00:14:13.484 "name": "spare", 00:14:13.484 "uuid": "4990a7af-a5d2-5929-80c8-0e44ef6c4e58", 00:14:13.484 "is_configured": true, 00:14:13.484 "data_offset": 2048, 00:14:13.484 "data_size": 63488 00:14:13.484 }, 00:14:13.484 { 00:14:13.484 "name": null, 00:14:13.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.484 
"is_configured": false, 00:14:13.484 "data_offset": 0, 00:14:13.484 "data_size": 63488 00:14:13.484 }, 00:14:13.484 { 00:14:13.484 "name": "BaseBdev3", 00:14:13.484 "uuid": "15aa295d-186a-5fdf-82c1-63cd2b22186d", 00:14:13.484 "is_configured": true, 00:14:13.484 "data_offset": 2048, 00:14:13.484 "data_size": 63488 00:14:13.484 }, 00:14:13.484 { 00:14:13.484 "name": "BaseBdev4", 00:14:13.484 "uuid": "5e70a0d7-dc23-57c0-bb42-031bd391bc6d", 00:14:13.484 "is_configured": true, 00:14:13.484 "data_offset": 2048, 00:14:13.484 "data_size": 63488 00:14:13.484 } 00:14:13.484 ] 00:14:13.484 }' 00:14:13.484 13:24:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:13.484 13:24:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:13.484 13:24:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:13.484 13:24:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:13.484 13:24:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:14.067 [2024-11-17 13:24:03.149582] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:14.067 [2024-11-17 13:24:03.249395] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:14.067 [2024-11-17 13:24:03.251371] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:14.586 105.71 IOPS, 317.14 MiB/s [2024-11-17T13:24:03.810Z] 13:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:14.586 13:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:14.587 13:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:14.587 13:24:03 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:14.587 13:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:14.587 13:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:14.587 13:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.587 13:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.587 13:24:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.587 13:24:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.587 13:24:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.587 13:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:14.587 "name": "raid_bdev1", 00:14:14.587 "uuid": "56607d2c-fac0-4727-b82e-ceb6c238f74a", 00:14:14.587 "strip_size_kb": 0, 00:14:14.587 "state": "online", 00:14:14.587 "raid_level": "raid1", 00:14:14.587 "superblock": true, 00:14:14.587 "num_base_bdevs": 4, 00:14:14.587 "num_base_bdevs_discovered": 3, 00:14:14.587 "num_base_bdevs_operational": 3, 00:14:14.587 "base_bdevs_list": [ 00:14:14.587 { 00:14:14.587 "name": "spare", 00:14:14.587 "uuid": "4990a7af-a5d2-5929-80c8-0e44ef6c4e58", 00:14:14.587 "is_configured": true, 00:14:14.587 "data_offset": 2048, 00:14:14.587 "data_size": 63488 00:14:14.587 }, 00:14:14.587 { 00:14:14.587 "name": null, 00:14:14.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.587 "is_configured": false, 00:14:14.587 "data_offset": 0, 00:14:14.587 "data_size": 63488 00:14:14.587 }, 00:14:14.587 { 00:14:14.587 "name": "BaseBdev3", 00:14:14.587 "uuid": "15aa295d-186a-5fdf-82c1-63cd2b22186d", 00:14:14.587 "is_configured": true, 00:14:14.587 "data_offset": 2048, 00:14:14.587 
"data_size": 63488 00:14:14.587 }, 00:14:14.587 { 00:14:14.587 "name": "BaseBdev4", 00:14:14.587 "uuid": "5e70a0d7-dc23-57c0-bb42-031bd391bc6d", 00:14:14.587 "is_configured": true, 00:14:14.587 "data_offset": 2048, 00:14:14.587 "data_size": 63488 00:14:14.587 } 00:14:14.587 ] 00:14:14.587 }' 00:14:14.587 13:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:14.587 13:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:14.587 13:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:14.587 13:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:14.587 13:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:14:14.587 13:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:14.587 13:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:14.587 13:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:14.587 13:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:14.587 13:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:14.587 13:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.587 13:24:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.587 13:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.587 13:24:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.847 13:24:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.847 
13:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:14.847 "name": "raid_bdev1", 00:14:14.847 "uuid": "56607d2c-fac0-4727-b82e-ceb6c238f74a", 00:14:14.847 "strip_size_kb": 0, 00:14:14.847 "state": "online", 00:14:14.847 "raid_level": "raid1", 00:14:14.847 "superblock": true, 00:14:14.847 "num_base_bdevs": 4, 00:14:14.847 "num_base_bdevs_discovered": 3, 00:14:14.847 "num_base_bdevs_operational": 3, 00:14:14.847 "base_bdevs_list": [ 00:14:14.847 { 00:14:14.847 "name": "spare", 00:14:14.847 "uuid": "4990a7af-a5d2-5929-80c8-0e44ef6c4e58", 00:14:14.847 "is_configured": true, 00:14:14.847 "data_offset": 2048, 00:14:14.847 "data_size": 63488 00:14:14.847 }, 00:14:14.847 { 00:14:14.847 "name": null, 00:14:14.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.847 "is_configured": false, 00:14:14.847 "data_offset": 0, 00:14:14.847 "data_size": 63488 00:14:14.847 }, 00:14:14.847 { 00:14:14.847 "name": "BaseBdev3", 00:14:14.847 "uuid": "15aa295d-186a-5fdf-82c1-63cd2b22186d", 00:14:14.847 "is_configured": true, 00:14:14.847 "data_offset": 2048, 00:14:14.847 "data_size": 63488 00:14:14.847 }, 00:14:14.847 { 00:14:14.847 "name": "BaseBdev4", 00:14:14.847 "uuid": "5e70a0d7-dc23-57c0-bb42-031bd391bc6d", 00:14:14.847 "is_configured": true, 00:14:14.847 "data_offset": 2048, 00:14:14.847 "data_size": 63488 00:14:14.847 } 00:14:14.847 ] 00:14:14.847 }' 00:14:14.848 13:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:14.848 13:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:14.848 13:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:14.848 13:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:14.848 13:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 
3 00:14:14.848 13:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:14.848 13:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:14.848 13:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:14.848 13:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:14.848 13:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:14.848 13:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.848 13:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.848 13:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.848 13:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.848 13:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.848 13:24:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.848 13:24:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.848 13:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.848 13:24:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.848 13:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.848 "name": "raid_bdev1", 00:14:14.848 "uuid": "56607d2c-fac0-4727-b82e-ceb6c238f74a", 00:14:14.848 "strip_size_kb": 0, 00:14:14.848 "state": "online", 00:14:14.848 "raid_level": "raid1", 00:14:14.848 "superblock": true, 00:14:14.848 "num_base_bdevs": 4, 00:14:14.848 "num_base_bdevs_discovered": 3, 00:14:14.848 
"num_base_bdevs_operational": 3, 00:14:14.848 "base_bdevs_list": [ 00:14:14.848 { 00:14:14.848 "name": "spare", 00:14:14.848 "uuid": "4990a7af-a5d2-5929-80c8-0e44ef6c4e58", 00:14:14.848 "is_configured": true, 00:14:14.848 "data_offset": 2048, 00:14:14.848 "data_size": 63488 00:14:14.848 }, 00:14:14.848 { 00:14:14.848 "name": null, 00:14:14.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.848 "is_configured": false, 00:14:14.848 "data_offset": 0, 00:14:14.848 "data_size": 63488 00:14:14.848 }, 00:14:14.848 { 00:14:14.848 "name": "BaseBdev3", 00:14:14.848 "uuid": "15aa295d-186a-5fdf-82c1-63cd2b22186d", 00:14:14.848 "is_configured": true, 00:14:14.848 "data_offset": 2048, 00:14:14.848 "data_size": 63488 00:14:14.848 }, 00:14:14.848 { 00:14:14.848 "name": "BaseBdev4", 00:14:14.848 "uuid": "5e70a0d7-dc23-57c0-bb42-031bd391bc6d", 00:14:14.848 "is_configured": true, 00:14:14.848 "data_offset": 2048, 00:14:14.848 "data_size": 63488 00:14:14.848 } 00:14:14.848 ] 00:14:14.848 }' 00:14:14.848 13:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.848 13:24:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.417 13:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:15.418 13:24:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.418 13:24:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.418 [2024-11-17 13:24:04.386031] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:15.418 [2024-11-17 13:24:04.386065] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:15.418 97.12 IOPS, 291.38 MiB/s 00:14:15.418 Latency(us) 00:14:15.418 [2024-11-17T13:24:04.642Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:15.418 Job: raid_bdev1 (Core Mask 
0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:15.418 raid_bdev1 : 8.03 97.05 291.16 0.00 0.00 14312.45 298.70 118136.51 00:14:15.418 [2024-11-17T13:24:04.642Z] =================================================================================================================== 00:14:15.418 [2024-11-17T13:24:04.642Z] Total : 97.05 291.16 0.00 0.00 14312.45 298.70 118136.51 00:14:15.418 { 00:14:15.418 "results": [ 00:14:15.418 { 00:14:15.418 "job": "raid_bdev1", 00:14:15.418 "core_mask": "0x1", 00:14:15.418 "workload": "randrw", 00:14:15.418 "percentage": 50, 00:14:15.418 "status": "finished", 00:14:15.418 "queue_depth": 2, 00:14:15.418 "io_size": 3145728, 00:14:15.418 "runtime": 8.026638, 00:14:15.418 "iops": 97.05184163033141, 00:14:15.418 "mibps": 291.15552489099423, 00:14:15.418 "io_failed": 0, 00:14:15.418 "io_timeout": 0, 00:14:15.418 "avg_latency_us": 14312.4479598186, 00:14:15.418 "min_latency_us": 298.70393013100437, 00:14:15.418 "max_latency_us": 118136.51004366812 00:14:15.418 } 00:14:15.418 ], 00:14:15.418 "core_count": 1 00:14:15.418 } 00:14:15.418 [2024-11-17 13:24:04.482887] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:15.418 [2024-11-17 13:24:04.482934] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:15.418 [2024-11-17 13:24:04.483031] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:15.418 [2024-11-17 13:24:04.483041] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:15.418 13:24:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.418 13:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.418 13:24:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.418 13:24:04 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:15.418 13:24:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.418 13:24:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.418 13:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:15.418 13:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:15.418 13:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:15.418 13:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:15.418 13:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:15.418 13:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:15.418 13:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:15.418 13:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:15.418 13:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:15.418 13:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:15.418 13:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:15.418 13:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:15.418 13:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:15.677 /dev/nbd0 00:14:15.677 13:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:15.677 13:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:15.677 13:24:04 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:15.677 13:24:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:15.677 13:24:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:15.677 13:24:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:15.677 13:24:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:15.677 13:24:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:15.677 13:24:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:15.677 13:24:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:15.677 13:24:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:15.677 1+0 records in 00:14:15.677 1+0 records out 00:14:15.677 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000368125 s, 11.1 MB/s 00:14:15.677 13:24:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:15.677 13:24:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:15.677 13:24:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:15.677 13:24:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:15.677 13:24:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:15.677 13:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:15.677 13:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:15.677 
13:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:15.677 13:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:14:15.677 13:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:14:15.677 13:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:15.678 13:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:14:15.678 13:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:14:15.678 13:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:15.678 13:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:14:15.678 13:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:15.678 13:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:15.678 13:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:15.678 13:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:15.678 13:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:15.678 13:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:15.678 13:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:14:15.937 /dev/nbd1 00:14:15.937 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:15.937 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:15.937 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:15.937 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:15.937 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:15.937 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:15.937 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:15.937 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:15.937 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:15.937 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:15.937 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:15.937 1+0 records in 00:14:15.937 1+0 records out 00:14:15.937 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000248435 s, 16.5 MB/s 00:14:15.937 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:15.937 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:15.937 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:15.937 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:15.937 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:15.938 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:15.938 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:15.938 13:24:05 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:16.197 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:16.197 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:16.197 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:16.197 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:16.197 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:16.197 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:16.197 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:16.197 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:16.197 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:16.197 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:16.197 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:16.197 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:16.197 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:16.197 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:16.197 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:16.197 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:16.197 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 
00:14:16.197 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:14:16.197 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:16.197 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:14:16.197 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:16.197 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:16.197 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:16.197 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:16.197 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:16.197 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:16.197 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:14:16.456 /dev/nbd1 00:14:16.456 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:16.457 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:16.457 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:16.457 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:16.457 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:16.457 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:16.457 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:16.457 13:24:05 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:16.457 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:16.457 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:16.457 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:16.457 1+0 records in 00:14:16.457 1+0 records out 00:14:16.457 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000218348 s, 18.8 MB/s 00:14:16.457 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:16.457 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:16.457 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:16.457 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:16.457 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:16.457 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:16.457 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:16.457 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:16.716 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:16.716 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:16.716 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:16.716 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # 
local nbd_list 00:14:16.716 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:16.716 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:16.716 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:16.716 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:16.716 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:16.716 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:16.716 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:16.716 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:16.716 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:16.716 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:16.716 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:16.716 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:16.716 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:16.976 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:16.976 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:16.976 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:16.976 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:16.976 13:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:16.976 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:16.976 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:16.976 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:16.976 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:16.976 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:16.976 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:16.976 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:16.976 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:16.976 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:16.976 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:16.976 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.976 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.976 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.976 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:16.976 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.976 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.976 [2024-11-17 13:24:06.156478] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:16.976 [2024-11-17 13:24:06.156533] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:16.976 [2024-11-17 13:24:06.156555] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:14:16.976 [2024-11-17 13:24:06.156564] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:16.976 [2024-11-17 13:24:06.158730] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:16.976 [2024-11-17 13:24:06.158815] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:16.976 [2024-11-17 13:24:06.158947] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:16.976 [2024-11-17 13:24:06.159001] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:16.976 [2024-11-17 13:24:06.159152] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:16.976 [2024-11-17 13:24:06.159274] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:16.976 spare 00:14:16.976 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.976 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:16.976 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.976 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:17.237 [2024-11-17 13:24:06.259179] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:17.237 [2024-11-17 13:24:06.259224] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:17.237 [2024-11-17 13:24:06.259558] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:14:17.237 [2024-11-17 13:24:06.259743] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 
0x617000007b00 00:14:17.237 [2024-11-17 13:24:06.259758] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:17.237 [2024-11-17 13:24:06.259948] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:17.237 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.237 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:17.237 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:17.237 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:17.237 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:17.237 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:17.237 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:17.237 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:17.237 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:17.237 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:17.237 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:17.237 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.237 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.237 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:17.237 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.237 
13:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.237 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:17.237 "name": "raid_bdev1", 00:14:17.237 "uuid": "56607d2c-fac0-4727-b82e-ceb6c238f74a", 00:14:17.237 "strip_size_kb": 0, 00:14:17.237 "state": "online", 00:14:17.237 "raid_level": "raid1", 00:14:17.237 "superblock": true, 00:14:17.237 "num_base_bdevs": 4, 00:14:17.237 "num_base_bdevs_discovered": 3, 00:14:17.237 "num_base_bdevs_operational": 3, 00:14:17.237 "base_bdevs_list": [ 00:14:17.237 { 00:14:17.237 "name": "spare", 00:14:17.237 "uuid": "4990a7af-a5d2-5929-80c8-0e44ef6c4e58", 00:14:17.237 "is_configured": true, 00:14:17.237 "data_offset": 2048, 00:14:17.237 "data_size": 63488 00:14:17.237 }, 00:14:17.237 { 00:14:17.237 "name": null, 00:14:17.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.237 "is_configured": false, 00:14:17.237 "data_offset": 2048, 00:14:17.237 "data_size": 63488 00:14:17.237 }, 00:14:17.237 { 00:14:17.237 "name": "BaseBdev3", 00:14:17.237 "uuid": "15aa295d-186a-5fdf-82c1-63cd2b22186d", 00:14:17.237 "is_configured": true, 00:14:17.237 "data_offset": 2048, 00:14:17.237 "data_size": 63488 00:14:17.237 }, 00:14:17.237 { 00:14:17.237 "name": "BaseBdev4", 00:14:17.237 "uuid": "5e70a0d7-dc23-57c0-bb42-031bd391bc6d", 00:14:17.237 "is_configured": true, 00:14:17.237 "data_offset": 2048, 00:14:17.237 "data_size": 63488 00:14:17.237 } 00:14:17.237 ] 00:14:17.237 }' 00:14:17.238 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:17.238 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:17.498 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:17.498 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:17.498 13:24:06 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:17.498 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:17.498 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:17.498 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.498 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.498 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.498 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:17.498 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.498 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:17.498 "name": "raid_bdev1", 00:14:17.498 "uuid": "56607d2c-fac0-4727-b82e-ceb6c238f74a", 00:14:17.498 "strip_size_kb": 0, 00:14:17.498 "state": "online", 00:14:17.498 "raid_level": "raid1", 00:14:17.498 "superblock": true, 00:14:17.498 "num_base_bdevs": 4, 00:14:17.498 "num_base_bdevs_discovered": 3, 00:14:17.498 "num_base_bdevs_operational": 3, 00:14:17.498 "base_bdevs_list": [ 00:14:17.498 { 00:14:17.498 "name": "spare", 00:14:17.498 "uuid": "4990a7af-a5d2-5929-80c8-0e44ef6c4e58", 00:14:17.498 "is_configured": true, 00:14:17.498 "data_offset": 2048, 00:14:17.498 "data_size": 63488 00:14:17.498 }, 00:14:17.498 { 00:14:17.498 "name": null, 00:14:17.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.498 "is_configured": false, 00:14:17.498 "data_offset": 2048, 00:14:17.498 "data_size": 63488 00:14:17.498 }, 00:14:17.498 { 00:14:17.498 "name": "BaseBdev3", 00:14:17.498 "uuid": "15aa295d-186a-5fdf-82c1-63cd2b22186d", 00:14:17.498 "is_configured": true, 00:14:17.498 "data_offset": 2048, 00:14:17.498 
"data_size": 63488 00:14:17.498 }, 00:14:17.498 { 00:14:17.498 "name": "BaseBdev4", 00:14:17.498 "uuid": "5e70a0d7-dc23-57c0-bb42-031bd391bc6d", 00:14:17.498 "is_configured": true, 00:14:17.498 "data_offset": 2048, 00:14:17.498 "data_size": 63488 00:14:17.498 } 00:14:17.498 ] 00:14:17.498 }' 00:14:17.498 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:17.758 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:17.758 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:17.758 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:17.758 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:17.758 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.758 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.758 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:17.758 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.758 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:17.758 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:17.758 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.758 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:17.758 [2024-11-17 13:24:06.843465] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:17.758 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.758 13:24:06 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:17.758 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:17.758 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:17.758 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:17.758 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:17.758 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:17.758 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:17.758 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:17.758 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:17.758 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:17.758 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.758 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.758 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.758 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:17.758 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.758 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:17.758 "name": "raid_bdev1", 00:14:17.758 "uuid": "56607d2c-fac0-4727-b82e-ceb6c238f74a", 00:14:17.758 "strip_size_kb": 0, 00:14:17.758 "state": "online", 00:14:17.758 "raid_level": "raid1", 00:14:17.758 
"superblock": true, 00:14:17.758 "num_base_bdevs": 4, 00:14:17.758 "num_base_bdevs_discovered": 2, 00:14:17.758 "num_base_bdevs_operational": 2, 00:14:17.758 "base_bdevs_list": [ 00:14:17.758 { 00:14:17.758 "name": null, 00:14:17.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.758 "is_configured": false, 00:14:17.758 "data_offset": 0, 00:14:17.758 "data_size": 63488 00:14:17.758 }, 00:14:17.758 { 00:14:17.758 "name": null, 00:14:17.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.758 "is_configured": false, 00:14:17.758 "data_offset": 2048, 00:14:17.758 "data_size": 63488 00:14:17.758 }, 00:14:17.758 { 00:14:17.758 "name": "BaseBdev3", 00:14:17.758 "uuid": "15aa295d-186a-5fdf-82c1-63cd2b22186d", 00:14:17.758 "is_configured": true, 00:14:17.758 "data_offset": 2048, 00:14:17.758 "data_size": 63488 00:14:17.758 }, 00:14:17.758 { 00:14:17.758 "name": "BaseBdev4", 00:14:17.758 "uuid": "5e70a0d7-dc23-57c0-bb42-031bd391bc6d", 00:14:17.758 "is_configured": true, 00:14:17.758 "data_offset": 2048, 00:14:17.758 "data_size": 63488 00:14:17.758 } 00:14:17.758 ] 00:14:17.758 }' 00:14:17.758 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:17.758 13:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.018 13:24:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:18.018 13:24:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.018 13:24:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.018 [2024-11-17 13:24:07.222896] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:18.018 [2024-11-17 13:24:07.223202] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:18.018 [2024-11-17 13:24:07.223296] 
bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:18.018 [2024-11-17 13:24:07.223412] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:18.018 [2024-11-17 13:24:07.238191] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:14:18.018 13:24:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.018 13:24:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:18.018 [2024-11-17 13:24:07.240149] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:19.395 13:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:19.395 13:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:19.395 13:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:19.395 13:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:19.395 13:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:19.395 13:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.395 13:24:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.395 13:24:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.396 13:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.396 13:24:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.396 13:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:19.396 "name": "raid_bdev1", 00:14:19.396 "uuid": 
"56607d2c-fac0-4727-b82e-ceb6c238f74a", 00:14:19.396 "strip_size_kb": 0, 00:14:19.396 "state": "online", 00:14:19.396 "raid_level": "raid1", 00:14:19.396 "superblock": true, 00:14:19.396 "num_base_bdevs": 4, 00:14:19.396 "num_base_bdevs_discovered": 3, 00:14:19.396 "num_base_bdevs_operational": 3, 00:14:19.396 "process": { 00:14:19.396 "type": "rebuild", 00:14:19.396 "target": "spare", 00:14:19.396 "progress": { 00:14:19.396 "blocks": 20480, 00:14:19.396 "percent": 32 00:14:19.396 } 00:14:19.396 }, 00:14:19.396 "base_bdevs_list": [ 00:14:19.396 { 00:14:19.396 "name": "spare", 00:14:19.396 "uuid": "4990a7af-a5d2-5929-80c8-0e44ef6c4e58", 00:14:19.396 "is_configured": true, 00:14:19.396 "data_offset": 2048, 00:14:19.396 "data_size": 63488 00:14:19.396 }, 00:14:19.396 { 00:14:19.396 "name": null, 00:14:19.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.396 "is_configured": false, 00:14:19.396 "data_offset": 2048, 00:14:19.396 "data_size": 63488 00:14:19.396 }, 00:14:19.396 { 00:14:19.396 "name": "BaseBdev3", 00:14:19.396 "uuid": "15aa295d-186a-5fdf-82c1-63cd2b22186d", 00:14:19.396 "is_configured": true, 00:14:19.396 "data_offset": 2048, 00:14:19.396 "data_size": 63488 00:14:19.396 }, 00:14:19.396 { 00:14:19.396 "name": "BaseBdev4", 00:14:19.396 "uuid": "5e70a0d7-dc23-57c0-bb42-031bd391bc6d", 00:14:19.396 "is_configured": true, 00:14:19.396 "data_offset": 2048, 00:14:19.396 "data_size": 63488 00:14:19.396 } 00:14:19.396 ] 00:14:19.396 }' 00:14:19.396 13:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:19.396 13:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:19.396 13:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:19.396 13:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:19.396 13:24:08 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:19.396 13:24:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.396 13:24:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.396 [2024-11-17 13:24:08.399680] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:19.396 [2024-11-17 13:24:08.445738] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:19.396 [2024-11-17 13:24:08.445848] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:19.396 [2024-11-17 13:24:08.445870] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:19.396 [2024-11-17 13:24:08.445886] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:19.396 13:24:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.396 13:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:19.396 13:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:19.396 13:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:19.396 13:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:19.396 13:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:19.396 13:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:19.396 13:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.396 13:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.396 13:24:08 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.396 13:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.396 13:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.396 13:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.396 13:24:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.396 13:24:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.396 13:24:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.396 13:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.396 "name": "raid_bdev1", 00:14:19.396 "uuid": "56607d2c-fac0-4727-b82e-ceb6c238f74a", 00:14:19.396 "strip_size_kb": 0, 00:14:19.396 "state": "online", 00:14:19.396 "raid_level": "raid1", 00:14:19.396 "superblock": true, 00:14:19.396 "num_base_bdevs": 4, 00:14:19.396 "num_base_bdevs_discovered": 2, 00:14:19.396 "num_base_bdevs_operational": 2, 00:14:19.396 "base_bdevs_list": [ 00:14:19.396 { 00:14:19.396 "name": null, 00:14:19.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.396 "is_configured": false, 00:14:19.396 "data_offset": 0, 00:14:19.396 "data_size": 63488 00:14:19.396 }, 00:14:19.396 { 00:14:19.396 "name": null, 00:14:19.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.396 "is_configured": false, 00:14:19.396 "data_offset": 2048, 00:14:19.396 "data_size": 63488 00:14:19.396 }, 00:14:19.396 { 00:14:19.396 "name": "BaseBdev3", 00:14:19.396 "uuid": "15aa295d-186a-5fdf-82c1-63cd2b22186d", 00:14:19.396 "is_configured": true, 00:14:19.396 "data_offset": 2048, 00:14:19.396 "data_size": 63488 00:14:19.396 }, 00:14:19.396 { 00:14:19.396 "name": "BaseBdev4", 00:14:19.396 "uuid": "5e70a0d7-dc23-57c0-bb42-031bd391bc6d", 
00:14:19.396 "is_configured": true, 00:14:19.396 "data_offset": 2048, 00:14:19.396 "data_size": 63488 00:14:19.396 } 00:14:19.396 ] 00:14:19.396 }' 00:14:19.396 13:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.396 13:24:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.962 13:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:19.962 13:24:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.962 13:24:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.962 [2024-11-17 13:24:08.922229] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:19.962 [2024-11-17 13:24:08.922359] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:19.962 [2024-11-17 13:24:08.922407] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:14:19.962 [2024-11-17 13:24:08.922436] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:19.962 [2024-11-17 13:24:08.922981] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:19.962 [2024-11-17 13:24:08.923041] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:19.962 [2024-11-17 13:24:08.923187] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:19.962 [2024-11-17 13:24:08.923248] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:19.962 [2024-11-17 13:24:08.923304] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:19.962 [2024-11-17 13:24:08.923380] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:19.962 [2024-11-17 13:24:08.938089] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:14:19.962 spare 00:14:19.962 13:24:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.962 13:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:19.962 [2024-11-17 13:24:08.939946] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:20.900 13:24:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:20.900 13:24:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:20.900 13:24:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:20.900 13:24:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:20.900 13:24:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:20.900 13:24:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.900 13:24:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.900 13:24:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.900 13:24:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:20.900 13:24:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.900 13:24:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:20.900 "name": "raid_bdev1", 00:14:20.900 "uuid": "56607d2c-fac0-4727-b82e-ceb6c238f74a", 00:14:20.900 "strip_size_kb": 0, 00:14:20.900 
"state": "online", 00:14:20.900 "raid_level": "raid1", 00:14:20.900 "superblock": true, 00:14:20.900 "num_base_bdevs": 4, 00:14:20.900 "num_base_bdevs_discovered": 3, 00:14:20.900 "num_base_bdevs_operational": 3, 00:14:20.900 "process": { 00:14:20.900 "type": "rebuild", 00:14:20.900 "target": "spare", 00:14:20.900 "progress": { 00:14:20.900 "blocks": 20480, 00:14:20.900 "percent": 32 00:14:20.900 } 00:14:20.900 }, 00:14:20.900 "base_bdevs_list": [ 00:14:20.900 { 00:14:20.900 "name": "spare", 00:14:20.900 "uuid": "4990a7af-a5d2-5929-80c8-0e44ef6c4e58", 00:14:20.900 "is_configured": true, 00:14:20.900 "data_offset": 2048, 00:14:20.900 "data_size": 63488 00:14:20.900 }, 00:14:20.900 { 00:14:20.900 "name": null, 00:14:20.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.900 "is_configured": false, 00:14:20.900 "data_offset": 2048, 00:14:20.900 "data_size": 63488 00:14:20.900 }, 00:14:20.900 { 00:14:20.900 "name": "BaseBdev3", 00:14:20.900 "uuid": "15aa295d-186a-5fdf-82c1-63cd2b22186d", 00:14:20.900 "is_configured": true, 00:14:20.900 "data_offset": 2048, 00:14:20.900 "data_size": 63488 00:14:20.900 }, 00:14:20.900 { 00:14:20.900 "name": "BaseBdev4", 00:14:20.900 "uuid": "5e70a0d7-dc23-57c0-bb42-031bd391bc6d", 00:14:20.900 "is_configured": true, 00:14:20.900 "data_offset": 2048, 00:14:20.900 "data_size": 63488 00:14:20.900 } 00:14:20.900 ] 00:14:20.900 }' 00:14:20.900 13:24:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:20.900 13:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:20.900 13:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:20.900 13:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:20.901 13:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:20.901 13:24:10 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.901 13:24:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:20.901 [2024-11-17 13:24:10.075405] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:21.161 [2024-11-17 13:24:10.145544] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:21.161 [2024-11-17 13:24:10.145653] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:21.161 [2024-11-17 13:24:10.145671] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:21.161 [2024-11-17 13:24:10.145681] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:21.161 13:24:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.161 13:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:21.161 13:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:21.161 13:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:21.161 13:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:21.161 13:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:21.161 13:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:21.161 13:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:21.161 13:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:21.161 13:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:21.161 13:24:10 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:21.161 13:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.161 13:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.161 13:24:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.161 13:24:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.161 13:24:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.161 13:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.161 "name": "raid_bdev1", 00:14:21.161 "uuid": "56607d2c-fac0-4727-b82e-ceb6c238f74a", 00:14:21.161 "strip_size_kb": 0, 00:14:21.161 "state": "online", 00:14:21.161 "raid_level": "raid1", 00:14:21.161 "superblock": true, 00:14:21.161 "num_base_bdevs": 4, 00:14:21.161 "num_base_bdevs_discovered": 2, 00:14:21.161 "num_base_bdevs_operational": 2, 00:14:21.161 "base_bdevs_list": [ 00:14:21.161 { 00:14:21.161 "name": null, 00:14:21.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.161 "is_configured": false, 00:14:21.161 "data_offset": 0, 00:14:21.161 "data_size": 63488 00:14:21.161 }, 00:14:21.161 { 00:14:21.161 "name": null, 00:14:21.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.161 "is_configured": false, 00:14:21.161 "data_offset": 2048, 00:14:21.161 "data_size": 63488 00:14:21.161 }, 00:14:21.161 { 00:14:21.161 "name": "BaseBdev3", 00:14:21.161 "uuid": "15aa295d-186a-5fdf-82c1-63cd2b22186d", 00:14:21.161 "is_configured": true, 00:14:21.161 "data_offset": 2048, 00:14:21.161 "data_size": 63488 00:14:21.161 }, 00:14:21.161 { 00:14:21.161 "name": "BaseBdev4", 00:14:21.161 "uuid": "5e70a0d7-dc23-57c0-bb42-031bd391bc6d", 00:14:21.161 "is_configured": true, 00:14:21.161 "data_offset": 2048, 00:14:21.161 
"data_size": 63488 00:14:21.161 } 00:14:21.161 ] 00:14:21.161 }' 00:14:21.161 13:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.161 13:24:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.419 13:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:21.419 13:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:21.419 13:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:21.419 13:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:21.419 13:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:21.679 13:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.679 13:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.679 13:24:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.679 13:24:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.679 13:24:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.679 13:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:21.679 "name": "raid_bdev1", 00:14:21.679 "uuid": "56607d2c-fac0-4727-b82e-ceb6c238f74a", 00:14:21.679 "strip_size_kb": 0, 00:14:21.679 "state": "online", 00:14:21.679 "raid_level": "raid1", 00:14:21.679 "superblock": true, 00:14:21.679 "num_base_bdevs": 4, 00:14:21.679 "num_base_bdevs_discovered": 2, 00:14:21.679 "num_base_bdevs_operational": 2, 00:14:21.679 "base_bdevs_list": [ 00:14:21.679 { 00:14:21.679 "name": null, 00:14:21.679 "uuid": "00000000-0000-0000-0000-000000000000", 
00:14:21.679 "is_configured": false, 00:14:21.679 "data_offset": 0, 00:14:21.679 "data_size": 63488 00:14:21.679 }, 00:14:21.679 { 00:14:21.679 "name": null, 00:14:21.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.679 "is_configured": false, 00:14:21.679 "data_offset": 2048, 00:14:21.679 "data_size": 63488 00:14:21.679 }, 00:14:21.679 { 00:14:21.679 "name": "BaseBdev3", 00:14:21.679 "uuid": "15aa295d-186a-5fdf-82c1-63cd2b22186d", 00:14:21.679 "is_configured": true, 00:14:21.679 "data_offset": 2048, 00:14:21.679 "data_size": 63488 00:14:21.679 }, 00:14:21.679 { 00:14:21.679 "name": "BaseBdev4", 00:14:21.679 "uuid": "5e70a0d7-dc23-57c0-bb42-031bd391bc6d", 00:14:21.679 "is_configured": true, 00:14:21.679 "data_offset": 2048, 00:14:21.679 "data_size": 63488 00:14:21.679 } 00:14:21.679 ] 00:14:21.679 }' 00:14:21.679 13:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:21.679 13:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:21.679 13:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:21.679 13:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:21.679 13:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:21.679 13:24:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.679 13:24:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.679 13:24:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.679 13:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:21.679 13:24:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.679 13:24:10 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.679 [2024-11-17 13:24:10.785740] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:21.679 [2024-11-17 13:24:10.785802] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:21.679 [2024-11-17 13:24:10.785822] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:14:21.679 [2024-11-17 13:24:10.785833] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:21.679 [2024-11-17 13:24:10.786325] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:21.679 [2024-11-17 13:24:10.786360] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:21.679 [2024-11-17 13:24:10.786443] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:21.679 [2024-11-17 13:24:10.786466] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:21.679 [2024-11-17 13:24:10.786474] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:21.679 [2024-11-17 13:24:10.786495] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:21.679 BaseBdev1 00:14:21.679 13:24:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.679 13:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:22.618 13:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:22.618 13:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:22.618 13:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:14:22.618 13:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:22.618 13:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:22.618 13:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:22.618 13:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.618 13:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.618 13:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.618 13:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.618 13:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.618 13:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.618 13:24:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.618 13:24:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.618 13:24:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.877 13:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.877 "name": "raid_bdev1", 00:14:22.877 "uuid": "56607d2c-fac0-4727-b82e-ceb6c238f74a", 00:14:22.877 "strip_size_kb": 0, 00:14:22.877 "state": "online", 00:14:22.877 "raid_level": "raid1", 00:14:22.877 "superblock": true, 00:14:22.878 "num_base_bdevs": 4, 00:14:22.878 "num_base_bdevs_discovered": 2, 00:14:22.878 "num_base_bdevs_operational": 2, 00:14:22.878 "base_bdevs_list": [ 00:14:22.878 { 00:14:22.878 "name": null, 00:14:22.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.878 "is_configured": false, 00:14:22.878 
"data_offset": 0, 00:14:22.878 "data_size": 63488 00:14:22.878 }, 00:14:22.878 { 00:14:22.878 "name": null, 00:14:22.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.878 "is_configured": false, 00:14:22.878 "data_offset": 2048, 00:14:22.878 "data_size": 63488 00:14:22.878 }, 00:14:22.878 { 00:14:22.878 "name": "BaseBdev3", 00:14:22.878 "uuid": "15aa295d-186a-5fdf-82c1-63cd2b22186d", 00:14:22.878 "is_configured": true, 00:14:22.878 "data_offset": 2048, 00:14:22.878 "data_size": 63488 00:14:22.878 }, 00:14:22.878 { 00:14:22.878 "name": "BaseBdev4", 00:14:22.878 "uuid": "5e70a0d7-dc23-57c0-bb42-031bd391bc6d", 00:14:22.878 "is_configured": true, 00:14:22.878 "data_offset": 2048, 00:14:22.878 "data_size": 63488 00:14:22.878 } 00:14:22.878 ] 00:14:22.878 }' 00:14:22.878 13:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.878 13:24:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:23.137 13:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:23.137 13:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:23.137 13:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:23.137 13:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:23.137 13:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:23.137 13:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.137 13:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.137 13:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.137 13:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set 
+x 00:14:23.137 13:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.137 13:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:23.137 "name": "raid_bdev1", 00:14:23.137 "uuid": "56607d2c-fac0-4727-b82e-ceb6c238f74a", 00:14:23.137 "strip_size_kb": 0, 00:14:23.137 "state": "online", 00:14:23.137 "raid_level": "raid1", 00:14:23.137 "superblock": true, 00:14:23.137 "num_base_bdevs": 4, 00:14:23.137 "num_base_bdevs_discovered": 2, 00:14:23.137 "num_base_bdevs_operational": 2, 00:14:23.137 "base_bdevs_list": [ 00:14:23.137 { 00:14:23.137 "name": null, 00:14:23.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.137 "is_configured": false, 00:14:23.137 "data_offset": 0, 00:14:23.137 "data_size": 63488 00:14:23.137 }, 00:14:23.137 { 00:14:23.137 "name": null, 00:14:23.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.137 "is_configured": false, 00:14:23.137 "data_offset": 2048, 00:14:23.137 "data_size": 63488 00:14:23.137 }, 00:14:23.137 { 00:14:23.137 "name": "BaseBdev3", 00:14:23.137 "uuid": "15aa295d-186a-5fdf-82c1-63cd2b22186d", 00:14:23.137 "is_configured": true, 00:14:23.137 "data_offset": 2048, 00:14:23.137 "data_size": 63488 00:14:23.137 }, 00:14:23.137 { 00:14:23.137 "name": "BaseBdev4", 00:14:23.137 "uuid": "5e70a0d7-dc23-57c0-bb42-031bd391bc6d", 00:14:23.137 "is_configured": true, 00:14:23.137 "data_offset": 2048, 00:14:23.137 "data_size": 63488 00:14:23.137 } 00:14:23.137 ] 00:14:23.137 }' 00:14:23.137 13:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:23.137 13:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:23.137 13:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:23.137 13:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:23.137 
13:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:23.137 13:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:14:23.137 13:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:23.138 13:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:23.138 13:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:23.138 13:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:23.138 13:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:23.138 13:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:23.138 13:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.138 13:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:23.138 [2024-11-17 13:24:12.335349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:23.138 [2024-11-17 13:24:12.335599] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:23.138 [2024-11-17 13:24:12.335615] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:23.138 request: 00:14:23.138 { 00:14:23.138 "base_bdev": "BaseBdev1", 00:14:23.138 "raid_bdev": "raid_bdev1", 00:14:23.138 "method": "bdev_raid_add_base_bdev", 00:14:23.138 "req_id": 1 00:14:23.138 } 00:14:23.138 Got JSON-RPC error response 00:14:23.138 response: 00:14:23.138 { 00:14:23.138 "code": -22, 00:14:23.138 
"message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:23.138 } 00:14:23.138 13:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:23.138 13:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:14:23.138 13:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:23.138 13:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:23.138 13:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:23.138 13:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:24.130 13:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:24.131 13:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:24.131 13:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:24.131 13:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:24.131 13:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:24.131 13:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:24.131 13:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.131 13:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.131 13:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.131 13:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.390 13:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.391 13:24:13 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.391 13:24:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.391 13:24:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.391 13:24:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.391 13:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.391 "name": "raid_bdev1", 00:14:24.391 "uuid": "56607d2c-fac0-4727-b82e-ceb6c238f74a", 00:14:24.391 "strip_size_kb": 0, 00:14:24.391 "state": "online", 00:14:24.391 "raid_level": "raid1", 00:14:24.391 "superblock": true, 00:14:24.391 "num_base_bdevs": 4, 00:14:24.391 "num_base_bdevs_discovered": 2, 00:14:24.391 "num_base_bdevs_operational": 2, 00:14:24.391 "base_bdevs_list": [ 00:14:24.391 { 00:14:24.391 "name": null, 00:14:24.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.391 "is_configured": false, 00:14:24.391 "data_offset": 0, 00:14:24.391 "data_size": 63488 00:14:24.391 }, 00:14:24.391 { 00:14:24.391 "name": null, 00:14:24.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.391 "is_configured": false, 00:14:24.391 "data_offset": 2048, 00:14:24.391 "data_size": 63488 00:14:24.391 }, 00:14:24.391 { 00:14:24.391 "name": "BaseBdev3", 00:14:24.391 "uuid": "15aa295d-186a-5fdf-82c1-63cd2b22186d", 00:14:24.391 "is_configured": true, 00:14:24.391 "data_offset": 2048, 00:14:24.391 "data_size": 63488 00:14:24.391 }, 00:14:24.391 { 00:14:24.391 "name": "BaseBdev4", 00:14:24.391 "uuid": "5e70a0d7-dc23-57c0-bb42-031bd391bc6d", 00:14:24.391 "is_configured": true, 00:14:24.391 "data_offset": 2048, 00:14:24.391 "data_size": 63488 00:14:24.391 } 00:14:24.391 ] 00:14:24.391 }' 00:14:24.391 13:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.391 13:24:13 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.651 13:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:24.651 13:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:24.651 13:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:24.651 13:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:24.651 13:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:24.651 13:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.651 13:24:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.651 13:24:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.651 13:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.651 13:24:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.651 13:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:24.651 "name": "raid_bdev1", 00:14:24.651 "uuid": "56607d2c-fac0-4727-b82e-ceb6c238f74a", 00:14:24.651 "strip_size_kb": 0, 00:14:24.651 "state": "online", 00:14:24.651 "raid_level": "raid1", 00:14:24.651 "superblock": true, 00:14:24.651 "num_base_bdevs": 4, 00:14:24.651 "num_base_bdevs_discovered": 2, 00:14:24.651 "num_base_bdevs_operational": 2, 00:14:24.651 "base_bdevs_list": [ 00:14:24.651 { 00:14:24.651 "name": null, 00:14:24.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.651 "is_configured": false, 00:14:24.651 "data_offset": 0, 00:14:24.651 "data_size": 63488 00:14:24.651 }, 00:14:24.651 { 00:14:24.651 "name": null, 00:14:24.651 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:24.651 "is_configured": false, 00:14:24.651 "data_offset": 2048, 00:14:24.651 "data_size": 63488 00:14:24.651 }, 00:14:24.651 { 00:14:24.651 "name": "BaseBdev3", 00:14:24.651 "uuid": "15aa295d-186a-5fdf-82c1-63cd2b22186d", 00:14:24.651 "is_configured": true, 00:14:24.651 "data_offset": 2048, 00:14:24.651 "data_size": 63488 00:14:24.651 }, 00:14:24.651 { 00:14:24.651 "name": "BaseBdev4", 00:14:24.651 "uuid": "5e70a0d7-dc23-57c0-bb42-031bd391bc6d", 00:14:24.651 "is_configured": true, 00:14:24.651 "data_offset": 2048, 00:14:24.651 "data_size": 63488 00:14:24.651 } 00:14:24.651 ] 00:14:24.651 }' 00:14:24.651 13:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:24.911 13:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:24.911 13:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:24.911 13:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:24.911 13:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79065 00:14:24.911 13:24:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 79065 ']' 00:14:24.911 13:24:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 79065 00:14:24.911 13:24:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:14:24.911 13:24:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:24.911 13:24:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79065 00:14:24.911 13:24:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:24.911 killing process with pid 79065 00:14:24.911 Received shutdown signal, test time was about 17.563765 
seconds 00:14:24.911 00:14:24.911 Latency(us) 00:14:24.911 [2024-11-17T13:24:14.135Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:24.911 [2024-11-17T13:24:14.135Z] =================================================================================================================== 00:14:24.911 [2024-11-17T13:24:14.135Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:24.911 13:24:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:24.911 13:24:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79065' 00:14:24.911 13:24:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 79065 00:14:24.911 [2024-11-17 13:24:13.980642] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:24.911 [2024-11-17 13:24:13.980774] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:24.911 13:24:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 79065 00:14:24.911 [2024-11-17 13:24:13.980844] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:24.911 [2024-11-17 13:24:13.980853] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:25.171 [2024-11-17 13:24:14.380938] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:26.554 13:24:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:26.554 00:14:26.554 real 0m20.896s 00:14:26.554 user 0m27.142s 00:14:26.554 sys 0m2.512s 00:14:26.554 ************************************ 00:14:26.554 END TEST raid_rebuild_test_sb_io 00:14:26.554 ************************************ 00:14:26.554 13:24:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:26.554 13:24:15 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:14:26.554 13:24:15 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:14:26.554 13:24:15 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:14:26.554 13:24:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:26.554 13:24:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:26.554 13:24:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:26.554 ************************************ 00:14:26.554 START TEST raid5f_state_function_test 00:14:26.554 ************************************ 00:14:26.554 13:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:14:26.554 13:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:26.554 13:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:26.554 13:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:26.554 13:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:26.554 13:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:26.554 13:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:26.554 13:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:26.554 13:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:26.554 13:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:26.554 13:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:26.554 13:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:26.554 13:24:15 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:26.554 13:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:26.554 13:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:26.554 13:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:26.554 13:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:26.554 13:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:26.554 13:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:26.554 13:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:26.555 13:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:26.555 13:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:26.555 13:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:26.555 13:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:26.555 13:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:26.555 13:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:26.555 13:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:26.555 13:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=79787 00:14:26.555 13:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:26.555 13:24:15 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 79787' 00:14:26.555 Process raid pid: 79787 00:14:26.555 13:24:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 79787 00:14:26.555 13:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 79787 ']' 00:14:26.555 13:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:26.555 13:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:26.555 13:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:26.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:26.555 13:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:26.555 13:24:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.555 [2024-11-17 13:24:15.663037] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:14:26.555 [2024-11-17 13:24:15.663231] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:26.815 [2024-11-17 13:24:15.834216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.815 [2024-11-17 13:24:15.941975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.075 [2024-11-17 13:24:16.146452] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:27.075 [2024-11-17 13:24:16.146487] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:27.335 13:24:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:27.335 13:24:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:14:27.335 13:24:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:27.335 13:24:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.335 13:24:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.335 [2024-11-17 13:24:16.511029] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:27.335 [2024-11-17 13:24:16.511083] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:27.335 [2024-11-17 13:24:16.511094] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:27.335 [2024-11-17 13:24:16.511103] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:27.335 [2024-11-17 13:24:16.511114] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:14:27.335 [2024-11-17 13:24:16.511123] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:27.335 13:24:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.335 13:24:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:27.335 13:24:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:27.335 13:24:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:27.335 13:24:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:27.335 13:24:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:27.335 13:24:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:27.335 13:24:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.335 13:24:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.335 13:24:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.335 13:24:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.335 13:24:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:27.335 13:24:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.335 13:24:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.335 13:24:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.335 13:24:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:14:27.595 13:24:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:27.595 "name": "Existed_Raid", 00:14:27.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.595 "strip_size_kb": 64, 00:14:27.595 "state": "configuring", 00:14:27.595 "raid_level": "raid5f", 00:14:27.595 "superblock": false, 00:14:27.595 "num_base_bdevs": 3, 00:14:27.595 "num_base_bdevs_discovered": 0, 00:14:27.595 "num_base_bdevs_operational": 3, 00:14:27.595 "base_bdevs_list": [ 00:14:27.595 { 00:14:27.595 "name": "BaseBdev1", 00:14:27.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.595 "is_configured": false, 00:14:27.595 "data_offset": 0, 00:14:27.595 "data_size": 0 00:14:27.595 }, 00:14:27.595 { 00:14:27.595 "name": "BaseBdev2", 00:14:27.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.595 "is_configured": false, 00:14:27.595 "data_offset": 0, 00:14:27.595 "data_size": 0 00:14:27.595 }, 00:14:27.595 { 00:14:27.595 "name": "BaseBdev3", 00:14:27.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.595 "is_configured": false, 00:14:27.595 "data_offset": 0, 00:14:27.595 "data_size": 0 00:14:27.595 } 00:14:27.595 ] 00:14:27.595 }' 00:14:27.595 13:24:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:27.595 13:24:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.855 13:24:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:27.855 13:24:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.855 13:24:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.855 [2024-11-17 13:24:16.946283] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:27.855 [2024-11-17 13:24:16.946366] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:14:27.855 13:24:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.855 13:24:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:27.855 13:24:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.855 13:24:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.855 [2024-11-17 13:24:16.958253] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:27.855 [2024-11-17 13:24:16.958344] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:27.855 [2024-11-17 13:24:16.958371] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:27.855 [2024-11-17 13:24:16.958392] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:27.855 [2024-11-17 13:24:16.958410] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:27.855 [2024-11-17 13:24:16.958430] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:27.855 13:24:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.855 13:24:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:27.855 13:24:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.855 13:24:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.855 [2024-11-17 13:24:17.004270] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:27.855 BaseBdev1 00:14:27.855 13:24:17 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.855 13:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:27.855 13:24:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:27.855 13:24:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:27.855 13:24:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:27.855 13:24:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:27.855 13:24:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:27.855 13:24:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:27.855 13:24:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.855 13:24:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.855 13:24:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.856 13:24:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:27.856 13:24:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.856 13:24:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.856 [ 00:14:27.856 { 00:14:27.856 "name": "BaseBdev1", 00:14:27.856 "aliases": [ 00:14:27.856 "61214547-013e-44ae-af4c-0b2ffbfbcc24" 00:14:27.856 ], 00:14:27.856 "product_name": "Malloc disk", 00:14:27.856 "block_size": 512, 00:14:27.856 "num_blocks": 65536, 00:14:27.856 "uuid": "61214547-013e-44ae-af4c-0b2ffbfbcc24", 00:14:27.856 "assigned_rate_limits": { 00:14:27.856 "rw_ios_per_sec": 0, 00:14:27.856 
"rw_mbytes_per_sec": 0, 00:14:27.856 "r_mbytes_per_sec": 0, 00:14:27.856 "w_mbytes_per_sec": 0 00:14:27.856 }, 00:14:27.856 "claimed": true, 00:14:27.856 "claim_type": "exclusive_write", 00:14:27.856 "zoned": false, 00:14:27.856 "supported_io_types": { 00:14:27.856 "read": true, 00:14:27.856 "write": true, 00:14:27.856 "unmap": true, 00:14:27.856 "flush": true, 00:14:27.856 "reset": true, 00:14:27.856 "nvme_admin": false, 00:14:27.856 "nvme_io": false, 00:14:27.856 "nvme_io_md": false, 00:14:27.856 "write_zeroes": true, 00:14:27.856 "zcopy": true, 00:14:27.856 "get_zone_info": false, 00:14:27.856 "zone_management": false, 00:14:27.856 "zone_append": false, 00:14:27.856 "compare": false, 00:14:27.856 "compare_and_write": false, 00:14:27.856 "abort": true, 00:14:27.856 "seek_hole": false, 00:14:27.856 "seek_data": false, 00:14:27.856 "copy": true, 00:14:27.856 "nvme_iov_md": false 00:14:27.856 }, 00:14:27.856 "memory_domains": [ 00:14:27.856 { 00:14:27.856 "dma_device_id": "system", 00:14:27.856 "dma_device_type": 1 00:14:27.856 }, 00:14:27.856 { 00:14:27.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:27.856 "dma_device_type": 2 00:14:27.856 } 00:14:27.856 ], 00:14:27.856 "driver_specific": {} 00:14:27.856 } 00:14:27.856 ] 00:14:27.856 13:24:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.856 13:24:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:27.856 13:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:27.856 13:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:27.856 13:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:27.856 13:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:27.856 13:24:17 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:27.856 13:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:27.856 13:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.856 13:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.856 13:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.856 13:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.856 13:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.856 13:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:27.856 13:24:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.856 13:24:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.856 13:24:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.116 13:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.116 "name": "Existed_Raid", 00:14:28.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.116 "strip_size_kb": 64, 00:14:28.116 "state": "configuring", 00:14:28.116 "raid_level": "raid5f", 00:14:28.116 "superblock": false, 00:14:28.116 "num_base_bdevs": 3, 00:14:28.116 "num_base_bdevs_discovered": 1, 00:14:28.116 "num_base_bdevs_operational": 3, 00:14:28.116 "base_bdevs_list": [ 00:14:28.116 { 00:14:28.116 "name": "BaseBdev1", 00:14:28.116 "uuid": "61214547-013e-44ae-af4c-0b2ffbfbcc24", 00:14:28.116 "is_configured": true, 00:14:28.116 "data_offset": 0, 00:14:28.116 "data_size": 65536 00:14:28.116 }, 00:14:28.116 { 00:14:28.116 "name": 
"BaseBdev2", 00:14:28.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.116 "is_configured": false, 00:14:28.116 "data_offset": 0, 00:14:28.116 "data_size": 0 00:14:28.116 }, 00:14:28.116 { 00:14:28.116 "name": "BaseBdev3", 00:14:28.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.116 "is_configured": false, 00:14:28.116 "data_offset": 0, 00:14:28.116 "data_size": 0 00:14:28.116 } 00:14:28.116 ] 00:14:28.116 }' 00:14:28.116 13:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.116 13:24:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.378 13:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:28.378 13:24:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.378 13:24:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.378 [2024-11-17 13:24:17.423554] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:28.378 [2024-11-17 13:24:17.423648] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:28.378 13:24:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.378 13:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:28.378 13:24:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.378 13:24:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.378 [2024-11-17 13:24:17.435590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:28.378 [2024-11-17 13:24:17.437412] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:14:28.378 [2024-11-17 13:24:17.437508] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:28.378 [2024-11-17 13:24:17.437536] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:28.378 [2024-11-17 13:24:17.437557] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:28.378 13:24:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.378 13:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:28.378 13:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:28.378 13:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:28.378 13:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:28.378 13:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:28.378 13:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:28.378 13:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:28.378 13:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:28.378 13:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:28.378 13:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:28.379 13:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:28.379 13:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:28.379 13:24:17 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:28.379 13:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.379 13:24:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.379 13:24:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.379 13:24:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.379 13:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.379 "name": "Existed_Raid", 00:14:28.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.379 "strip_size_kb": 64, 00:14:28.379 "state": "configuring", 00:14:28.379 "raid_level": "raid5f", 00:14:28.379 "superblock": false, 00:14:28.379 "num_base_bdevs": 3, 00:14:28.379 "num_base_bdevs_discovered": 1, 00:14:28.379 "num_base_bdevs_operational": 3, 00:14:28.379 "base_bdevs_list": [ 00:14:28.379 { 00:14:28.379 "name": "BaseBdev1", 00:14:28.379 "uuid": "61214547-013e-44ae-af4c-0b2ffbfbcc24", 00:14:28.379 "is_configured": true, 00:14:28.379 "data_offset": 0, 00:14:28.379 "data_size": 65536 00:14:28.379 }, 00:14:28.379 { 00:14:28.379 "name": "BaseBdev2", 00:14:28.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.379 "is_configured": false, 00:14:28.379 "data_offset": 0, 00:14:28.379 "data_size": 0 00:14:28.379 }, 00:14:28.379 { 00:14:28.379 "name": "BaseBdev3", 00:14:28.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.379 "is_configured": false, 00:14:28.379 "data_offset": 0, 00:14:28.379 "data_size": 0 00:14:28.379 } 00:14:28.379 ] 00:14:28.379 }' 00:14:28.379 13:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.379 13:24:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.639 13:24:17 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:28.639 13:24:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.639 13:24:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.639 [2024-11-17 13:24:17.857725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:28.639 BaseBdev2 00:14:28.639 13:24:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.639 13:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:28.639 13:24:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:28.639 13:24:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:28.639 13:24:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:28.639 13:24:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:28.639 13:24:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:28.639 13:24:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:28.639 13:24:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.639 13:24:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.899 13:24:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.899 13:24:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:28.899 13:24:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.900 13:24:17 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:28.900 [ 00:14:28.900 { 00:14:28.900 "name": "BaseBdev2", 00:14:28.900 "aliases": [ 00:14:28.900 "b1c74eab-5fc8-4000-bcb0-7444bc3effeb" 00:14:28.900 ], 00:14:28.900 "product_name": "Malloc disk", 00:14:28.900 "block_size": 512, 00:14:28.900 "num_blocks": 65536, 00:14:28.900 "uuid": "b1c74eab-5fc8-4000-bcb0-7444bc3effeb", 00:14:28.900 "assigned_rate_limits": { 00:14:28.900 "rw_ios_per_sec": 0, 00:14:28.900 "rw_mbytes_per_sec": 0, 00:14:28.900 "r_mbytes_per_sec": 0, 00:14:28.900 "w_mbytes_per_sec": 0 00:14:28.900 }, 00:14:28.900 "claimed": true, 00:14:28.900 "claim_type": "exclusive_write", 00:14:28.900 "zoned": false, 00:14:28.900 "supported_io_types": { 00:14:28.900 "read": true, 00:14:28.900 "write": true, 00:14:28.900 "unmap": true, 00:14:28.900 "flush": true, 00:14:28.900 "reset": true, 00:14:28.900 "nvme_admin": false, 00:14:28.900 "nvme_io": false, 00:14:28.900 "nvme_io_md": false, 00:14:28.900 "write_zeroes": true, 00:14:28.900 "zcopy": true, 00:14:28.900 "get_zone_info": false, 00:14:28.900 "zone_management": false, 00:14:28.900 "zone_append": false, 00:14:28.900 "compare": false, 00:14:28.900 "compare_and_write": false, 00:14:28.900 "abort": true, 00:14:28.900 "seek_hole": false, 00:14:28.900 "seek_data": false, 00:14:28.900 "copy": true, 00:14:28.900 "nvme_iov_md": false 00:14:28.900 }, 00:14:28.900 "memory_domains": [ 00:14:28.900 { 00:14:28.900 "dma_device_id": "system", 00:14:28.900 "dma_device_type": 1 00:14:28.900 }, 00:14:28.900 { 00:14:28.900 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:28.900 "dma_device_type": 2 00:14:28.900 } 00:14:28.900 ], 00:14:28.900 "driver_specific": {} 00:14:28.900 } 00:14:28.900 ] 00:14:28.900 13:24:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.900 13:24:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:28.900 13:24:17 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:28.900 13:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:28.900 13:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:28.900 13:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:28.900 13:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:28.900 13:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:28.900 13:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:28.900 13:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:28.900 13:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:28.900 13:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:28.900 13:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:28.900 13:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:28.900 13:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:28.900 13:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.900 13:24:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.900 13:24:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.900 13:24:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.900 13:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:14:28.900 "name": "Existed_Raid", 00:14:28.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.900 "strip_size_kb": 64, 00:14:28.900 "state": "configuring", 00:14:28.900 "raid_level": "raid5f", 00:14:28.900 "superblock": false, 00:14:28.900 "num_base_bdevs": 3, 00:14:28.900 "num_base_bdevs_discovered": 2, 00:14:28.900 "num_base_bdevs_operational": 3, 00:14:28.900 "base_bdevs_list": [ 00:14:28.900 { 00:14:28.900 "name": "BaseBdev1", 00:14:28.900 "uuid": "61214547-013e-44ae-af4c-0b2ffbfbcc24", 00:14:28.900 "is_configured": true, 00:14:28.900 "data_offset": 0, 00:14:28.900 "data_size": 65536 00:14:28.900 }, 00:14:28.900 { 00:14:28.900 "name": "BaseBdev2", 00:14:28.900 "uuid": "b1c74eab-5fc8-4000-bcb0-7444bc3effeb", 00:14:28.900 "is_configured": true, 00:14:28.900 "data_offset": 0, 00:14:28.900 "data_size": 65536 00:14:28.900 }, 00:14:28.900 { 00:14:28.900 "name": "BaseBdev3", 00:14:28.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.900 "is_configured": false, 00:14:28.900 "data_offset": 0, 00:14:28.900 "data_size": 0 00:14:28.900 } 00:14:28.900 ] 00:14:28.900 }' 00:14:28.900 13:24:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.900 13:24:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.161 13:24:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:29.161 13:24:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.161 13:24:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.161 [2024-11-17 13:24:18.365982] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:29.161 [2024-11-17 13:24:18.366112] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:29.161 [2024-11-17 13:24:18.366143] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:29.161 [2024-11-17 13:24:18.366486] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:29.161 [2024-11-17 13:24:18.371737] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:29.161 [2024-11-17 13:24:18.371790] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:29.161 [2024-11-17 13:24:18.372141] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:29.161 BaseBdev3 00:14:29.161 13:24:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.161 13:24:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:29.161 13:24:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:29.161 13:24:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:29.161 13:24:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:29.161 13:24:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:29.161 13:24:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:29.161 13:24:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:29.161 13:24:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.161 13:24:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.421 13:24:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.421 13:24:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:14:29.421 13:24:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.421 13:24:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.421 [ 00:14:29.421 { 00:14:29.421 "name": "BaseBdev3", 00:14:29.421 "aliases": [ 00:14:29.421 "f501a0d6-0246-4e1d-8fd5-15fefdc7b07e" 00:14:29.421 ], 00:14:29.421 "product_name": "Malloc disk", 00:14:29.421 "block_size": 512, 00:14:29.421 "num_blocks": 65536, 00:14:29.421 "uuid": "f501a0d6-0246-4e1d-8fd5-15fefdc7b07e", 00:14:29.421 "assigned_rate_limits": { 00:14:29.421 "rw_ios_per_sec": 0, 00:14:29.421 "rw_mbytes_per_sec": 0, 00:14:29.421 "r_mbytes_per_sec": 0, 00:14:29.421 "w_mbytes_per_sec": 0 00:14:29.421 }, 00:14:29.421 "claimed": true, 00:14:29.421 "claim_type": "exclusive_write", 00:14:29.421 "zoned": false, 00:14:29.421 "supported_io_types": { 00:14:29.421 "read": true, 00:14:29.421 "write": true, 00:14:29.421 "unmap": true, 00:14:29.421 "flush": true, 00:14:29.421 "reset": true, 00:14:29.421 "nvme_admin": false, 00:14:29.421 "nvme_io": false, 00:14:29.421 "nvme_io_md": false, 00:14:29.421 "write_zeroes": true, 00:14:29.421 "zcopy": true, 00:14:29.421 "get_zone_info": false, 00:14:29.421 "zone_management": false, 00:14:29.421 "zone_append": false, 00:14:29.421 "compare": false, 00:14:29.421 "compare_and_write": false, 00:14:29.421 "abort": true, 00:14:29.421 "seek_hole": false, 00:14:29.421 "seek_data": false, 00:14:29.421 "copy": true, 00:14:29.421 "nvme_iov_md": false 00:14:29.421 }, 00:14:29.421 "memory_domains": [ 00:14:29.421 { 00:14:29.421 "dma_device_id": "system", 00:14:29.421 "dma_device_type": 1 00:14:29.421 }, 00:14:29.421 { 00:14:29.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.421 "dma_device_type": 2 00:14:29.421 } 00:14:29.421 ], 00:14:29.421 "driver_specific": {} 00:14:29.421 } 00:14:29.421 ] 00:14:29.421 13:24:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:14:29.421 13:24:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:29.421 13:24:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:29.421 13:24:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:29.421 13:24:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:29.421 13:24:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:29.421 13:24:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:29.421 13:24:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:29.421 13:24:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:29.421 13:24:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:29.421 13:24:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.421 13:24:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.421 13:24:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.421 13:24:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.421 13:24:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.421 13:24:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:29.421 13:24:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.421 13:24:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.421 13:24:18 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.421 13:24:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.421 "name": "Existed_Raid", 00:14:29.421 "uuid": "e4008a4a-8fa1-4fd9-bf80-a5318c87ce73", 00:14:29.421 "strip_size_kb": 64, 00:14:29.421 "state": "online", 00:14:29.421 "raid_level": "raid5f", 00:14:29.421 "superblock": false, 00:14:29.421 "num_base_bdevs": 3, 00:14:29.421 "num_base_bdevs_discovered": 3, 00:14:29.421 "num_base_bdevs_operational": 3, 00:14:29.421 "base_bdevs_list": [ 00:14:29.421 { 00:14:29.421 "name": "BaseBdev1", 00:14:29.421 "uuid": "61214547-013e-44ae-af4c-0b2ffbfbcc24", 00:14:29.421 "is_configured": true, 00:14:29.421 "data_offset": 0, 00:14:29.421 "data_size": 65536 00:14:29.421 }, 00:14:29.421 { 00:14:29.421 "name": "BaseBdev2", 00:14:29.421 "uuid": "b1c74eab-5fc8-4000-bcb0-7444bc3effeb", 00:14:29.421 "is_configured": true, 00:14:29.421 "data_offset": 0, 00:14:29.421 "data_size": 65536 00:14:29.421 }, 00:14:29.421 { 00:14:29.421 "name": "BaseBdev3", 00:14:29.421 "uuid": "f501a0d6-0246-4e1d-8fd5-15fefdc7b07e", 00:14:29.421 "is_configured": true, 00:14:29.421 "data_offset": 0, 00:14:29.421 "data_size": 65536 00:14:29.421 } 00:14:29.421 ] 00:14:29.421 }' 00:14:29.421 13:24:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.421 13:24:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.682 13:24:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:29.682 13:24:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:29.682 13:24:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:29.682 13:24:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:29.682 13:24:18 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:29.682 13:24:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:29.682 13:24:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:29.682 13:24:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:29.682 13:24:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.682 13:24:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.682 [2024-11-17 13:24:18.853705] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:29.682 13:24:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.682 13:24:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:29.682 "name": "Existed_Raid", 00:14:29.682 "aliases": [ 00:14:29.682 "e4008a4a-8fa1-4fd9-bf80-a5318c87ce73" 00:14:29.682 ], 00:14:29.682 "product_name": "Raid Volume", 00:14:29.682 "block_size": 512, 00:14:29.682 "num_blocks": 131072, 00:14:29.682 "uuid": "e4008a4a-8fa1-4fd9-bf80-a5318c87ce73", 00:14:29.682 "assigned_rate_limits": { 00:14:29.682 "rw_ios_per_sec": 0, 00:14:29.682 "rw_mbytes_per_sec": 0, 00:14:29.682 "r_mbytes_per_sec": 0, 00:14:29.682 "w_mbytes_per_sec": 0 00:14:29.682 }, 00:14:29.682 "claimed": false, 00:14:29.682 "zoned": false, 00:14:29.682 "supported_io_types": { 00:14:29.682 "read": true, 00:14:29.682 "write": true, 00:14:29.682 "unmap": false, 00:14:29.682 "flush": false, 00:14:29.682 "reset": true, 00:14:29.682 "nvme_admin": false, 00:14:29.682 "nvme_io": false, 00:14:29.682 "nvme_io_md": false, 00:14:29.682 "write_zeroes": true, 00:14:29.682 "zcopy": false, 00:14:29.682 "get_zone_info": false, 00:14:29.682 "zone_management": false, 00:14:29.682 "zone_append": false, 
00:14:29.682 "compare": false, 00:14:29.682 "compare_and_write": false, 00:14:29.682 "abort": false, 00:14:29.682 "seek_hole": false, 00:14:29.682 "seek_data": false, 00:14:29.682 "copy": false, 00:14:29.682 "nvme_iov_md": false 00:14:29.682 }, 00:14:29.682 "driver_specific": { 00:14:29.682 "raid": { 00:14:29.682 "uuid": "e4008a4a-8fa1-4fd9-bf80-a5318c87ce73", 00:14:29.682 "strip_size_kb": 64, 00:14:29.682 "state": "online", 00:14:29.682 "raid_level": "raid5f", 00:14:29.682 "superblock": false, 00:14:29.682 "num_base_bdevs": 3, 00:14:29.682 "num_base_bdevs_discovered": 3, 00:14:29.682 "num_base_bdevs_operational": 3, 00:14:29.682 "base_bdevs_list": [ 00:14:29.682 { 00:14:29.682 "name": "BaseBdev1", 00:14:29.682 "uuid": "61214547-013e-44ae-af4c-0b2ffbfbcc24", 00:14:29.682 "is_configured": true, 00:14:29.682 "data_offset": 0, 00:14:29.682 "data_size": 65536 00:14:29.682 }, 00:14:29.682 { 00:14:29.682 "name": "BaseBdev2", 00:14:29.682 "uuid": "b1c74eab-5fc8-4000-bcb0-7444bc3effeb", 00:14:29.682 "is_configured": true, 00:14:29.682 "data_offset": 0, 00:14:29.683 "data_size": 65536 00:14:29.683 }, 00:14:29.683 { 00:14:29.683 "name": "BaseBdev3", 00:14:29.683 "uuid": "f501a0d6-0246-4e1d-8fd5-15fefdc7b07e", 00:14:29.683 "is_configured": true, 00:14:29.683 "data_offset": 0, 00:14:29.683 "data_size": 65536 00:14:29.683 } 00:14:29.683 ] 00:14:29.683 } 00:14:29.683 } 00:14:29.683 }' 00:14:29.683 13:24:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:29.943 13:24:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:29.943 BaseBdev2 00:14:29.943 BaseBdev3' 00:14:29.943 13:24:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:29.943 13:24:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:14:29.943 13:24:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:29.943 13:24:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:29.943 13:24:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:29.943 13:24:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.943 13:24:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.943 13:24:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.943 13:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:29.943 13:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:29.943 13:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:29.943 13:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:29.943 13:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:29.943 13:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.943 13:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.943 13:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.943 13:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:29.943 13:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:29.943 13:24:19 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:29.943 13:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:29.943 13:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:29.943 13:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.943 13:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.943 13:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.943 13:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:29.943 13:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:29.943 13:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:29.943 13:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.943 13:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.943 [2024-11-17 13:24:19.093171] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:30.205 13:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.205 13:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:30.205 13:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:30.205 13:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:30.205 13:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:30.205 13:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:30.205 
13:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:14:30.205 13:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:30.205 13:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:30.205 13:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:30.205 13:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:30.205 13:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:30.205 13:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.205 13:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.205 13:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.205 13:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.205 13:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:30.205 13:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.205 13:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.205 13:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.205 13:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.205 13:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.205 "name": "Existed_Raid", 00:14:30.205 "uuid": "e4008a4a-8fa1-4fd9-bf80-a5318c87ce73", 00:14:30.205 "strip_size_kb": 64, 00:14:30.205 "state": 
"online", 00:14:30.205 "raid_level": "raid5f", 00:14:30.205 "superblock": false, 00:14:30.205 "num_base_bdevs": 3, 00:14:30.205 "num_base_bdevs_discovered": 2, 00:14:30.205 "num_base_bdevs_operational": 2, 00:14:30.205 "base_bdevs_list": [ 00:14:30.205 { 00:14:30.205 "name": null, 00:14:30.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.205 "is_configured": false, 00:14:30.205 "data_offset": 0, 00:14:30.205 "data_size": 65536 00:14:30.205 }, 00:14:30.205 { 00:14:30.205 "name": "BaseBdev2", 00:14:30.205 "uuid": "b1c74eab-5fc8-4000-bcb0-7444bc3effeb", 00:14:30.205 "is_configured": true, 00:14:30.205 "data_offset": 0, 00:14:30.205 "data_size": 65536 00:14:30.205 }, 00:14:30.205 { 00:14:30.205 "name": "BaseBdev3", 00:14:30.205 "uuid": "f501a0d6-0246-4e1d-8fd5-15fefdc7b07e", 00:14:30.205 "is_configured": true, 00:14:30.205 "data_offset": 0, 00:14:30.205 "data_size": 65536 00:14:30.205 } 00:14:30.205 ] 00:14:30.205 }' 00:14:30.205 13:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.205 13:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.467 13:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:30.467 13:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:30.467 13:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.467 13:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:30.467 13:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.467 13:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.467 13:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.467 13:24:19 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:30.467 13:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:30.467 13:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:30.467 13:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.467 13:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.467 [2024-11-17 13:24:19.659367] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:30.467 [2024-11-17 13:24:19.659521] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:30.727 [2024-11-17 13:24:19.753576] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:30.727 13:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.727 13:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:30.727 13:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:30.727 13:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.727 13:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.727 13:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:30.727 13:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.727 13:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.727 13:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:30.727 13:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:14:30.727 13:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:30.727 13:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.727 13:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.727 [2024-11-17 13:24:19.809510] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:30.727 [2024-11-17 13:24:19.809557] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:30.727 13:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.727 13:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:30.727 13:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:30.727 13:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.727 13:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.727 13:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.727 13:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:30.727 13:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.987 13:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:30.987 13:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:30.987 13:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:30.987 13:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:30.987 13:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:14:30.987 13:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:30.987 13:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.987 13:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.987 BaseBdev2 00:14:30.987 13:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.987 13:24:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:30.987 13:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:30.987 13:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:30.987 13:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:30.987 13:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:30.987 13:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:30.987 13:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:30.987 13:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.987 13:24:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.987 13:24:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.987 13:24:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:30.987 13:24:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.987 13:24:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:14:30.987 [ 00:14:30.987 { 00:14:30.987 "name": "BaseBdev2", 00:14:30.987 "aliases": [ 00:14:30.987 "4a6484bb-3ec2-4925-a359-61b7154ddf1a" 00:14:30.987 ], 00:14:30.987 "product_name": "Malloc disk", 00:14:30.987 "block_size": 512, 00:14:30.987 "num_blocks": 65536, 00:14:30.987 "uuid": "4a6484bb-3ec2-4925-a359-61b7154ddf1a", 00:14:30.987 "assigned_rate_limits": { 00:14:30.987 "rw_ios_per_sec": 0, 00:14:30.987 "rw_mbytes_per_sec": 0, 00:14:30.987 "r_mbytes_per_sec": 0, 00:14:30.987 "w_mbytes_per_sec": 0 00:14:30.987 }, 00:14:30.987 "claimed": false, 00:14:30.987 "zoned": false, 00:14:30.987 "supported_io_types": { 00:14:30.987 "read": true, 00:14:30.987 "write": true, 00:14:30.987 "unmap": true, 00:14:30.987 "flush": true, 00:14:30.987 "reset": true, 00:14:30.987 "nvme_admin": false, 00:14:30.987 "nvme_io": false, 00:14:30.987 "nvme_io_md": false, 00:14:30.987 "write_zeroes": true, 00:14:30.987 "zcopy": true, 00:14:30.987 "get_zone_info": false, 00:14:30.987 "zone_management": false, 00:14:30.987 "zone_append": false, 00:14:30.987 "compare": false, 00:14:30.987 "compare_and_write": false, 00:14:30.987 "abort": true, 00:14:30.987 "seek_hole": false, 00:14:30.987 "seek_data": false, 00:14:30.987 "copy": true, 00:14:30.987 "nvme_iov_md": false 00:14:30.987 }, 00:14:30.987 "memory_domains": [ 00:14:30.987 { 00:14:30.987 "dma_device_id": "system", 00:14:30.987 "dma_device_type": 1 00:14:30.987 }, 00:14:30.987 { 00:14:30.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:30.987 "dma_device_type": 2 00:14:30.987 } 00:14:30.987 ], 00:14:30.987 "driver_specific": {} 00:14:30.987 } 00:14:30.987 ] 00:14:30.987 13:24:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.987 13:24:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:30.987 13:24:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:30.987 13:24:20 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:30.987 13:24:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:30.987 13:24:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.987 13:24:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.987 BaseBdev3 00:14:30.987 13:24:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.987 13:24:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:30.987 13:24:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:30.987 13:24:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:30.987 13:24:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:30.987 13:24:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:30.987 13:24:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:30.987 13:24:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:30.987 13:24:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.987 13:24:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.987 13:24:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.987 13:24:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:30.988 13:24:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.988 13:24:20 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:30.988 [ 00:14:30.988 { 00:14:30.988 "name": "BaseBdev3", 00:14:30.988 "aliases": [ 00:14:30.988 "40df8451-32f6-4e94-822c-cdcdca5ff64e" 00:14:30.988 ], 00:14:30.988 "product_name": "Malloc disk", 00:14:30.988 "block_size": 512, 00:14:30.988 "num_blocks": 65536, 00:14:30.988 "uuid": "40df8451-32f6-4e94-822c-cdcdca5ff64e", 00:14:30.988 "assigned_rate_limits": { 00:14:30.988 "rw_ios_per_sec": 0, 00:14:30.988 "rw_mbytes_per_sec": 0, 00:14:30.988 "r_mbytes_per_sec": 0, 00:14:30.988 "w_mbytes_per_sec": 0 00:14:30.988 }, 00:14:30.988 "claimed": false, 00:14:30.988 "zoned": false, 00:14:30.988 "supported_io_types": { 00:14:30.988 "read": true, 00:14:30.988 "write": true, 00:14:30.988 "unmap": true, 00:14:30.988 "flush": true, 00:14:30.988 "reset": true, 00:14:30.988 "nvme_admin": false, 00:14:30.988 "nvme_io": false, 00:14:30.988 "nvme_io_md": false, 00:14:30.988 "write_zeroes": true, 00:14:30.988 "zcopy": true, 00:14:30.988 "get_zone_info": false, 00:14:30.988 "zone_management": false, 00:14:30.988 "zone_append": false, 00:14:30.988 "compare": false, 00:14:30.988 "compare_and_write": false, 00:14:30.988 "abort": true, 00:14:30.988 "seek_hole": false, 00:14:30.988 "seek_data": false, 00:14:30.988 "copy": true, 00:14:30.988 "nvme_iov_md": false 00:14:30.988 }, 00:14:30.988 "memory_domains": [ 00:14:30.988 { 00:14:30.988 "dma_device_id": "system", 00:14:30.988 "dma_device_type": 1 00:14:30.988 }, 00:14:30.988 { 00:14:30.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:30.988 "dma_device_type": 2 00:14:30.988 } 00:14:30.988 ], 00:14:30.988 "driver_specific": {} 00:14:30.988 } 00:14:30.988 ] 00:14:30.988 13:24:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.988 13:24:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:30.988 13:24:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:30.988 13:24:20 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:30.988 13:24:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:30.988 13:24:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.988 13:24:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.988 [2024-11-17 13:24:20.115773] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:30.988 [2024-11-17 13:24:20.115817] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:30.988 [2024-11-17 13:24:20.115853] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:30.988 [2024-11-17 13:24:20.117600] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:30.988 13:24:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.988 13:24:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:30.988 13:24:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:30.988 13:24:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:30.988 13:24:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:30.988 13:24:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:30.988 13:24:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:30.988 13:24:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.988 13:24:20 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.988 13:24:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.988 13:24:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.988 13:24:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.988 13:24:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:30.988 13:24:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.988 13:24:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.988 13:24:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.988 13:24:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.988 "name": "Existed_Raid", 00:14:30.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.988 "strip_size_kb": 64, 00:14:30.988 "state": "configuring", 00:14:30.988 "raid_level": "raid5f", 00:14:30.988 "superblock": false, 00:14:30.988 "num_base_bdevs": 3, 00:14:30.988 "num_base_bdevs_discovered": 2, 00:14:30.988 "num_base_bdevs_operational": 3, 00:14:30.988 "base_bdevs_list": [ 00:14:30.988 { 00:14:30.988 "name": "BaseBdev1", 00:14:30.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.988 "is_configured": false, 00:14:30.988 "data_offset": 0, 00:14:30.988 "data_size": 0 00:14:30.988 }, 00:14:30.988 { 00:14:30.988 "name": "BaseBdev2", 00:14:30.988 "uuid": "4a6484bb-3ec2-4925-a359-61b7154ddf1a", 00:14:30.988 "is_configured": true, 00:14:30.988 "data_offset": 0, 00:14:30.988 "data_size": 65536 00:14:30.988 }, 00:14:30.988 { 00:14:30.988 "name": "BaseBdev3", 00:14:30.988 "uuid": "40df8451-32f6-4e94-822c-cdcdca5ff64e", 00:14:30.988 "is_configured": true, 
00:14:30.988 "data_offset": 0, 00:14:30.988 "data_size": 65536 00:14:30.988 } 00:14:30.988 ] 00:14:30.988 }' 00:14:30.988 13:24:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.988 13:24:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.558 13:24:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:31.558 13:24:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.558 13:24:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.558 [2024-11-17 13:24:20.543039] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:31.558 13:24:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.558 13:24:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:31.558 13:24:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:31.558 13:24:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:31.558 13:24:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:31.558 13:24:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:31.558 13:24:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:31.558 13:24:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.558 13:24:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.558 13:24:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.558 13:24:20 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.558 13:24:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.558 13:24:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.558 13:24:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.558 13:24:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:31.558 13:24:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.558 13:24:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.558 "name": "Existed_Raid", 00:14:31.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.558 "strip_size_kb": 64, 00:14:31.558 "state": "configuring", 00:14:31.558 "raid_level": "raid5f", 00:14:31.558 "superblock": false, 00:14:31.558 "num_base_bdevs": 3, 00:14:31.558 "num_base_bdevs_discovered": 1, 00:14:31.558 "num_base_bdevs_operational": 3, 00:14:31.558 "base_bdevs_list": [ 00:14:31.558 { 00:14:31.558 "name": "BaseBdev1", 00:14:31.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.558 "is_configured": false, 00:14:31.558 "data_offset": 0, 00:14:31.558 "data_size": 0 00:14:31.558 }, 00:14:31.558 { 00:14:31.558 "name": null, 00:14:31.558 "uuid": "4a6484bb-3ec2-4925-a359-61b7154ddf1a", 00:14:31.558 "is_configured": false, 00:14:31.558 "data_offset": 0, 00:14:31.558 "data_size": 65536 00:14:31.558 }, 00:14:31.558 { 00:14:31.558 "name": "BaseBdev3", 00:14:31.558 "uuid": "40df8451-32f6-4e94-822c-cdcdca5ff64e", 00:14:31.558 "is_configured": true, 00:14:31.558 "data_offset": 0, 00:14:31.558 "data_size": 65536 00:14:31.558 } 00:14:31.558 ] 00:14:31.558 }' 00:14:31.558 13:24:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.558 13:24:20 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.818 13:24:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.818 13:24:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.818 13:24:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:31.818 13:24:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.818 13:24:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.079 13:24:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:32.079 13:24:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:32.079 13:24:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.079 13:24:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.079 [2024-11-17 13:24:21.098121] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:32.079 BaseBdev1 00:14:32.079 13:24:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.079 13:24:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:32.079 13:24:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:32.079 13:24:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:32.079 13:24:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:32.079 13:24:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:32.079 13:24:21 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:32.079 13:24:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:32.079 13:24:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.079 13:24:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.079 13:24:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.079 13:24:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:32.079 13:24:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.079 13:24:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.079 [ 00:14:32.079 { 00:14:32.079 "name": "BaseBdev1", 00:14:32.079 "aliases": [ 00:14:32.079 "97e43f55-8825-460b-a0c4-eeedcd769fd8" 00:14:32.079 ], 00:14:32.079 "product_name": "Malloc disk", 00:14:32.079 "block_size": 512, 00:14:32.079 "num_blocks": 65536, 00:14:32.079 "uuid": "97e43f55-8825-460b-a0c4-eeedcd769fd8", 00:14:32.079 "assigned_rate_limits": { 00:14:32.079 "rw_ios_per_sec": 0, 00:14:32.079 "rw_mbytes_per_sec": 0, 00:14:32.079 "r_mbytes_per_sec": 0, 00:14:32.079 "w_mbytes_per_sec": 0 00:14:32.079 }, 00:14:32.079 "claimed": true, 00:14:32.079 "claim_type": "exclusive_write", 00:14:32.079 "zoned": false, 00:14:32.079 "supported_io_types": { 00:14:32.079 "read": true, 00:14:32.079 "write": true, 00:14:32.079 "unmap": true, 00:14:32.079 "flush": true, 00:14:32.079 "reset": true, 00:14:32.079 "nvme_admin": false, 00:14:32.079 "nvme_io": false, 00:14:32.079 "nvme_io_md": false, 00:14:32.079 "write_zeroes": true, 00:14:32.079 "zcopy": true, 00:14:32.079 "get_zone_info": false, 00:14:32.079 "zone_management": false, 00:14:32.079 "zone_append": false, 00:14:32.079 
"compare": false, 00:14:32.079 "compare_and_write": false, 00:14:32.079 "abort": true, 00:14:32.079 "seek_hole": false, 00:14:32.079 "seek_data": false, 00:14:32.079 "copy": true, 00:14:32.079 "nvme_iov_md": false 00:14:32.079 }, 00:14:32.079 "memory_domains": [ 00:14:32.079 { 00:14:32.079 "dma_device_id": "system", 00:14:32.079 "dma_device_type": 1 00:14:32.079 }, 00:14:32.079 { 00:14:32.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:32.079 "dma_device_type": 2 00:14:32.079 } 00:14:32.079 ], 00:14:32.079 "driver_specific": {} 00:14:32.079 } 00:14:32.079 ] 00:14:32.079 13:24:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.079 13:24:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:32.079 13:24:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:32.079 13:24:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:32.079 13:24:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:32.079 13:24:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:32.079 13:24:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:32.079 13:24:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:32.079 13:24:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.079 13:24:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.079 13:24:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.079 13:24:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.079 13:24:21 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.079 13:24:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:32.079 13:24:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.079 13:24:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.079 13:24:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.079 13:24:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.079 "name": "Existed_Raid", 00:14:32.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.079 "strip_size_kb": 64, 00:14:32.079 "state": "configuring", 00:14:32.079 "raid_level": "raid5f", 00:14:32.079 "superblock": false, 00:14:32.079 "num_base_bdevs": 3, 00:14:32.079 "num_base_bdevs_discovered": 2, 00:14:32.079 "num_base_bdevs_operational": 3, 00:14:32.079 "base_bdevs_list": [ 00:14:32.079 { 00:14:32.079 "name": "BaseBdev1", 00:14:32.079 "uuid": "97e43f55-8825-460b-a0c4-eeedcd769fd8", 00:14:32.079 "is_configured": true, 00:14:32.079 "data_offset": 0, 00:14:32.079 "data_size": 65536 00:14:32.079 }, 00:14:32.079 { 00:14:32.079 "name": null, 00:14:32.079 "uuid": "4a6484bb-3ec2-4925-a359-61b7154ddf1a", 00:14:32.079 "is_configured": false, 00:14:32.079 "data_offset": 0, 00:14:32.079 "data_size": 65536 00:14:32.079 }, 00:14:32.079 { 00:14:32.079 "name": "BaseBdev3", 00:14:32.079 "uuid": "40df8451-32f6-4e94-822c-cdcdca5ff64e", 00:14:32.079 "is_configured": true, 00:14:32.079 "data_offset": 0, 00:14:32.079 "data_size": 65536 00:14:32.079 } 00:14:32.079 ] 00:14:32.079 }' 00:14:32.079 13:24:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.079 13:24:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.649 13:24:21 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.649 13:24:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.649 13:24:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.649 13:24:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:32.649 13:24:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.649 13:24:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:32.649 13:24:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:32.649 13:24:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.649 13:24:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.649 [2024-11-17 13:24:21.649257] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:32.649 13:24:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.649 13:24:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:32.649 13:24:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:32.649 13:24:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:32.649 13:24:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:32.649 13:24:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:32.649 13:24:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:32.649 13:24:21 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.649 13:24:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.649 13:24:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.649 13:24:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.649 13:24:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.649 13:24:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.649 13:24:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.649 13:24:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:32.649 13:24:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.649 13:24:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.649 "name": "Existed_Raid", 00:14:32.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.649 "strip_size_kb": 64, 00:14:32.649 "state": "configuring", 00:14:32.649 "raid_level": "raid5f", 00:14:32.649 "superblock": false, 00:14:32.649 "num_base_bdevs": 3, 00:14:32.649 "num_base_bdevs_discovered": 1, 00:14:32.649 "num_base_bdevs_operational": 3, 00:14:32.649 "base_bdevs_list": [ 00:14:32.649 { 00:14:32.649 "name": "BaseBdev1", 00:14:32.649 "uuid": "97e43f55-8825-460b-a0c4-eeedcd769fd8", 00:14:32.649 "is_configured": true, 00:14:32.649 "data_offset": 0, 00:14:32.649 "data_size": 65536 00:14:32.649 }, 00:14:32.649 { 00:14:32.649 "name": null, 00:14:32.649 "uuid": "4a6484bb-3ec2-4925-a359-61b7154ddf1a", 00:14:32.649 "is_configured": false, 00:14:32.649 "data_offset": 0, 00:14:32.649 "data_size": 65536 00:14:32.649 }, 00:14:32.649 { 00:14:32.649 "name": null, 
00:14:32.649 "uuid": "40df8451-32f6-4e94-822c-cdcdca5ff64e", 00:14:32.649 "is_configured": false, 00:14:32.649 "data_offset": 0, 00:14:32.649 "data_size": 65536 00:14:32.649 } 00:14:32.649 ] 00:14:32.649 }' 00:14:32.649 13:24:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.649 13:24:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.909 13:24:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.909 13:24:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.909 13:24:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.909 13:24:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:32.909 13:24:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.169 13:24:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:33.169 13:24:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:33.169 13:24:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.169 13:24:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.169 [2024-11-17 13:24:22.144415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:33.169 13:24:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.169 13:24:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:33.169 13:24:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:33.169 13:24:22 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:33.169 13:24:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:33.169 13:24:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:33.169 13:24:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:33.169 13:24:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.169 13:24:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.169 13:24:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.169 13:24:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.169 13:24:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.169 13:24:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:33.169 13:24:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.169 13:24:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.169 13:24:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.169 13:24:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:33.169 "name": "Existed_Raid", 00:14:33.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.169 "strip_size_kb": 64, 00:14:33.169 "state": "configuring", 00:14:33.169 "raid_level": "raid5f", 00:14:33.170 "superblock": false, 00:14:33.170 "num_base_bdevs": 3, 00:14:33.170 "num_base_bdevs_discovered": 2, 00:14:33.170 "num_base_bdevs_operational": 3, 00:14:33.170 "base_bdevs_list": [ 00:14:33.170 { 
00:14:33.170 "name": "BaseBdev1", 00:14:33.170 "uuid": "97e43f55-8825-460b-a0c4-eeedcd769fd8", 00:14:33.170 "is_configured": true, 00:14:33.170 "data_offset": 0, 00:14:33.170 "data_size": 65536 00:14:33.170 }, 00:14:33.170 { 00:14:33.170 "name": null, 00:14:33.170 "uuid": "4a6484bb-3ec2-4925-a359-61b7154ddf1a", 00:14:33.170 "is_configured": false, 00:14:33.170 "data_offset": 0, 00:14:33.170 "data_size": 65536 00:14:33.170 }, 00:14:33.170 { 00:14:33.170 "name": "BaseBdev3", 00:14:33.170 "uuid": "40df8451-32f6-4e94-822c-cdcdca5ff64e", 00:14:33.170 "is_configured": true, 00:14:33.170 "data_offset": 0, 00:14:33.170 "data_size": 65536 00:14:33.170 } 00:14:33.170 ] 00:14:33.170 }' 00:14:33.170 13:24:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:33.170 13:24:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.430 13:24:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:33.430 13:24:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.430 13:24:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.430 13:24:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.430 13:24:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.430 13:24:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:33.430 13:24:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:33.430 13:24:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.430 13:24:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.430 [2024-11-17 13:24:22.599649] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:33.690 13:24:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.690 13:24:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:33.690 13:24:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:33.690 13:24:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:33.690 13:24:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:33.690 13:24:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:33.690 13:24:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:33.690 13:24:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.690 13:24:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.690 13:24:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.690 13:24:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.690 13:24:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.690 13:24:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:33.690 13:24:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.690 13:24:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.690 13:24:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.690 13:24:22 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:33.690 "name": "Existed_Raid", 00:14:33.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.690 "strip_size_kb": 64, 00:14:33.690 "state": "configuring", 00:14:33.690 "raid_level": "raid5f", 00:14:33.690 "superblock": false, 00:14:33.690 "num_base_bdevs": 3, 00:14:33.690 "num_base_bdevs_discovered": 1, 00:14:33.690 "num_base_bdevs_operational": 3, 00:14:33.690 "base_bdevs_list": [ 00:14:33.690 { 00:14:33.690 "name": null, 00:14:33.690 "uuid": "97e43f55-8825-460b-a0c4-eeedcd769fd8", 00:14:33.690 "is_configured": false, 00:14:33.690 "data_offset": 0, 00:14:33.691 "data_size": 65536 00:14:33.691 }, 00:14:33.691 { 00:14:33.691 "name": null, 00:14:33.691 "uuid": "4a6484bb-3ec2-4925-a359-61b7154ddf1a", 00:14:33.691 "is_configured": false, 00:14:33.691 "data_offset": 0, 00:14:33.691 "data_size": 65536 00:14:33.691 }, 00:14:33.691 { 00:14:33.691 "name": "BaseBdev3", 00:14:33.691 "uuid": "40df8451-32f6-4e94-822c-cdcdca5ff64e", 00:14:33.691 "is_configured": true, 00:14:33.691 "data_offset": 0, 00:14:33.691 "data_size": 65536 00:14:33.691 } 00:14:33.691 ] 00:14:33.691 }' 00:14:33.691 13:24:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:33.691 13:24:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.950 13:24:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:34.210 13:24:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.210 13:24:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.210 13:24:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.210 13:24:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.210 13:24:23 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:34.210 13:24:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:34.210 13:24:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.210 13:24:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.210 [2024-11-17 13:24:23.203839] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:34.210 13:24:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.210 13:24:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:34.210 13:24:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:34.210 13:24:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:34.210 13:24:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:34.210 13:24:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:34.210 13:24:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:34.210 13:24:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.210 13:24:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.210 13:24:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.210 13:24:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.210 13:24:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.210 13:24:23 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.210 13:24:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:34.210 13:24:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.210 13:24:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.210 13:24:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.210 "name": "Existed_Raid", 00:14:34.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.210 "strip_size_kb": 64, 00:14:34.210 "state": "configuring", 00:14:34.210 "raid_level": "raid5f", 00:14:34.210 "superblock": false, 00:14:34.210 "num_base_bdevs": 3, 00:14:34.210 "num_base_bdevs_discovered": 2, 00:14:34.210 "num_base_bdevs_operational": 3, 00:14:34.210 "base_bdevs_list": [ 00:14:34.210 { 00:14:34.210 "name": null, 00:14:34.210 "uuid": "97e43f55-8825-460b-a0c4-eeedcd769fd8", 00:14:34.210 "is_configured": false, 00:14:34.210 "data_offset": 0, 00:14:34.210 "data_size": 65536 00:14:34.210 }, 00:14:34.210 { 00:14:34.210 "name": "BaseBdev2", 00:14:34.210 "uuid": "4a6484bb-3ec2-4925-a359-61b7154ddf1a", 00:14:34.210 "is_configured": true, 00:14:34.210 "data_offset": 0, 00:14:34.210 "data_size": 65536 00:14:34.210 }, 00:14:34.210 { 00:14:34.210 "name": "BaseBdev3", 00:14:34.210 "uuid": "40df8451-32f6-4e94-822c-cdcdca5ff64e", 00:14:34.210 "is_configured": true, 00:14:34.210 "data_offset": 0, 00:14:34.210 "data_size": 65536 00:14:34.210 } 00:14:34.210 ] 00:14:34.210 }' 00:14:34.210 13:24:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.210 13:24:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.470 13:24:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.470 13:24:23 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.470 13:24:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:34.470 13:24:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.470 13:24:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.470 13:24:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:34.470 13:24:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.470 13:24:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.470 13:24:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.470 13:24:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:34.470 13:24:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.731 13:24:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 97e43f55-8825-460b-a0c4-eeedcd769fd8 00:14:34.731 13:24:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.731 13:24:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.731 [2024-11-17 13:24:23.756236] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:34.731 [2024-11-17 13:24:23.756277] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:34.731 [2024-11-17 13:24:23.756285] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:34.731 [2024-11-17 13:24:23.756535] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:14:34.731 [2024-11-17 13:24:23.761894] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:34.731 [2024-11-17 13:24:23.761913] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:34.731 [2024-11-17 13:24:23.762202] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:34.731 NewBaseBdev 00:14:34.731 13:24:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.731 13:24:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:34.731 13:24:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:34.731 13:24:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:34.731 13:24:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:34.731 13:24:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:34.731 13:24:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:34.731 13:24:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:34.731 13:24:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.731 13:24:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.731 13:24:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.731 13:24:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:34.731 13:24:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.731 13:24:23 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.731 [ 00:14:34.731 { 00:14:34.731 "name": "NewBaseBdev", 00:14:34.731 "aliases": [ 00:14:34.731 "97e43f55-8825-460b-a0c4-eeedcd769fd8" 00:14:34.731 ], 00:14:34.731 "product_name": "Malloc disk", 00:14:34.731 "block_size": 512, 00:14:34.731 "num_blocks": 65536, 00:14:34.731 "uuid": "97e43f55-8825-460b-a0c4-eeedcd769fd8", 00:14:34.731 "assigned_rate_limits": { 00:14:34.731 "rw_ios_per_sec": 0, 00:14:34.731 "rw_mbytes_per_sec": 0, 00:14:34.731 "r_mbytes_per_sec": 0, 00:14:34.731 "w_mbytes_per_sec": 0 00:14:34.731 }, 00:14:34.731 "claimed": true, 00:14:34.731 "claim_type": "exclusive_write", 00:14:34.731 "zoned": false, 00:14:34.731 "supported_io_types": { 00:14:34.731 "read": true, 00:14:34.731 "write": true, 00:14:34.731 "unmap": true, 00:14:34.731 "flush": true, 00:14:34.731 "reset": true, 00:14:34.731 "nvme_admin": false, 00:14:34.731 "nvme_io": false, 00:14:34.731 "nvme_io_md": false, 00:14:34.731 "write_zeroes": true, 00:14:34.731 "zcopy": true, 00:14:34.731 "get_zone_info": false, 00:14:34.731 "zone_management": false, 00:14:34.731 "zone_append": false, 00:14:34.731 "compare": false, 00:14:34.731 "compare_and_write": false, 00:14:34.731 "abort": true, 00:14:34.731 "seek_hole": false, 00:14:34.731 "seek_data": false, 00:14:34.731 "copy": true, 00:14:34.731 "nvme_iov_md": false 00:14:34.731 }, 00:14:34.731 "memory_domains": [ 00:14:34.731 { 00:14:34.731 "dma_device_id": "system", 00:14:34.731 "dma_device_type": 1 00:14:34.731 }, 00:14:34.731 { 00:14:34.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:34.731 "dma_device_type": 2 00:14:34.731 } 00:14:34.731 ], 00:14:34.731 "driver_specific": {} 00:14:34.731 } 00:14:34.731 ] 00:14:34.731 13:24:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.731 13:24:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:34.731 13:24:23 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:34.731 13:24:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:34.731 13:24:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:34.731 13:24:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:34.731 13:24:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:34.731 13:24:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:34.731 13:24:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.731 13:24:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.731 13:24:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.731 13:24:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.731 13:24:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.731 13:24:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:34.731 13:24:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.731 13:24:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.731 13:24:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.731 13:24:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.731 "name": "Existed_Raid", 00:14:34.731 "uuid": "cfc52ec8-1fe4-4919-a32f-35f83518c118", 00:14:34.731 "strip_size_kb": 64, 00:14:34.731 "state": "online", 
00:14:34.731 "raid_level": "raid5f", 00:14:34.731 "superblock": false, 00:14:34.731 "num_base_bdevs": 3, 00:14:34.731 "num_base_bdevs_discovered": 3, 00:14:34.731 "num_base_bdevs_operational": 3, 00:14:34.731 "base_bdevs_list": [ 00:14:34.731 { 00:14:34.731 "name": "NewBaseBdev", 00:14:34.731 "uuid": "97e43f55-8825-460b-a0c4-eeedcd769fd8", 00:14:34.731 "is_configured": true, 00:14:34.731 "data_offset": 0, 00:14:34.731 "data_size": 65536 00:14:34.731 }, 00:14:34.731 { 00:14:34.731 "name": "BaseBdev2", 00:14:34.731 "uuid": "4a6484bb-3ec2-4925-a359-61b7154ddf1a", 00:14:34.731 "is_configured": true, 00:14:34.731 "data_offset": 0, 00:14:34.731 "data_size": 65536 00:14:34.731 }, 00:14:34.731 { 00:14:34.731 "name": "BaseBdev3", 00:14:34.731 "uuid": "40df8451-32f6-4e94-822c-cdcdca5ff64e", 00:14:34.731 "is_configured": true, 00:14:34.731 "data_offset": 0, 00:14:34.731 "data_size": 65536 00:14:34.731 } 00:14:34.731 ] 00:14:34.731 }' 00:14:34.731 13:24:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.731 13:24:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.302 13:24:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:35.302 13:24:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:35.302 13:24:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:35.302 13:24:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:35.302 13:24:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:35.302 13:24:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:35.302 13:24:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:35.302 13:24:24 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.302 13:24:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:35.302 13:24:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.302 [2024-11-17 13:24:24.231933] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:35.302 13:24:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.302 13:24:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:35.302 "name": "Existed_Raid", 00:14:35.302 "aliases": [ 00:14:35.302 "cfc52ec8-1fe4-4919-a32f-35f83518c118" 00:14:35.302 ], 00:14:35.302 "product_name": "Raid Volume", 00:14:35.302 "block_size": 512, 00:14:35.302 "num_blocks": 131072, 00:14:35.302 "uuid": "cfc52ec8-1fe4-4919-a32f-35f83518c118", 00:14:35.302 "assigned_rate_limits": { 00:14:35.302 "rw_ios_per_sec": 0, 00:14:35.302 "rw_mbytes_per_sec": 0, 00:14:35.302 "r_mbytes_per_sec": 0, 00:14:35.302 "w_mbytes_per_sec": 0 00:14:35.302 }, 00:14:35.302 "claimed": false, 00:14:35.302 "zoned": false, 00:14:35.302 "supported_io_types": { 00:14:35.302 "read": true, 00:14:35.302 "write": true, 00:14:35.302 "unmap": false, 00:14:35.302 "flush": false, 00:14:35.302 "reset": true, 00:14:35.302 "nvme_admin": false, 00:14:35.302 "nvme_io": false, 00:14:35.302 "nvme_io_md": false, 00:14:35.302 "write_zeroes": true, 00:14:35.302 "zcopy": false, 00:14:35.302 "get_zone_info": false, 00:14:35.302 "zone_management": false, 00:14:35.302 "zone_append": false, 00:14:35.302 "compare": false, 00:14:35.302 "compare_and_write": false, 00:14:35.302 "abort": false, 00:14:35.302 "seek_hole": false, 00:14:35.302 "seek_data": false, 00:14:35.302 "copy": false, 00:14:35.302 "nvme_iov_md": false 00:14:35.302 }, 00:14:35.302 "driver_specific": { 00:14:35.302 "raid": { 00:14:35.302 "uuid": 
"cfc52ec8-1fe4-4919-a32f-35f83518c118", 00:14:35.302 "strip_size_kb": 64, 00:14:35.302 "state": "online", 00:14:35.302 "raid_level": "raid5f", 00:14:35.302 "superblock": false, 00:14:35.302 "num_base_bdevs": 3, 00:14:35.302 "num_base_bdevs_discovered": 3, 00:14:35.302 "num_base_bdevs_operational": 3, 00:14:35.302 "base_bdevs_list": [ 00:14:35.302 { 00:14:35.302 "name": "NewBaseBdev", 00:14:35.302 "uuid": "97e43f55-8825-460b-a0c4-eeedcd769fd8", 00:14:35.302 "is_configured": true, 00:14:35.302 "data_offset": 0, 00:14:35.302 "data_size": 65536 00:14:35.302 }, 00:14:35.302 { 00:14:35.302 "name": "BaseBdev2", 00:14:35.302 "uuid": "4a6484bb-3ec2-4925-a359-61b7154ddf1a", 00:14:35.302 "is_configured": true, 00:14:35.302 "data_offset": 0, 00:14:35.302 "data_size": 65536 00:14:35.302 }, 00:14:35.302 { 00:14:35.302 "name": "BaseBdev3", 00:14:35.302 "uuid": "40df8451-32f6-4e94-822c-cdcdca5ff64e", 00:14:35.302 "is_configured": true, 00:14:35.302 "data_offset": 0, 00:14:35.302 "data_size": 65536 00:14:35.302 } 00:14:35.302 ] 00:14:35.302 } 00:14:35.302 } 00:14:35.302 }' 00:14:35.302 13:24:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:35.302 13:24:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:35.302 BaseBdev2 00:14:35.302 BaseBdev3' 00:14:35.302 13:24:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:35.302 13:24:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:35.302 13:24:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:35.302 13:24:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:35.302 13:24:24 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:35.302 13:24:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.302 13:24:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.302 13:24:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.302 13:24:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:35.302 13:24:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:35.302 13:24:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:35.302 13:24:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:35.302 13:24:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.302 13:24:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.302 13:24:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:35.302 13:24:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.302 13:24:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:35.302 13:24:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:35.302 13:24:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:35.302 13:24:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:35.302 13:24:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:14:35.302 13:24:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.302 13:24:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.302 13:24:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.302 13:24:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:35.302 13:24:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:35.302 13:24:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:35.302 13:24:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.302 13:24:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.302 [2024-11-17 13:24:24.499309] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:35.302 [2024-11-17 13:24:24.499335] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:35.302 [2024-11-17 13:24:24.499406] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:35.302 [2024-11-17 13:24:24.499671] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:35.302 [2024-11-17 13:24:24.499685] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:35.302 13:24:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.302 13:24:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 79787 00:14:35.302 13:24:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 79787 ']' 00:14:35.302 13:24:24 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 79787 00:14:35.302 13:24:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:14:35.302 13:24:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:35.302 13:24:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79787 00:14:35.563 13:24:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:35.563 13:24:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:35.563 13:24:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79787' 00:14:35.563 killing process with pid 79787 00:14:35.563 13:24:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 79787 00:14:35.563 [2024-11-17 13:24:24.542559] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:35.563 13:24:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 79787 00:14:35.823 [2024-11-17 13:24:24.827145] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:36.764 13:24:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:36.764 00:14:36.764 real 0m10.298s 00:14:36.764 user 0m16.301s 00:14:36.764 sys 0m1.939s 00:14:36.764 13:24:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:36.764 13:24:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.764 ************************************ 00:14:36.764 END TEST raid5f_state_function_test 00:14:36.764 ************************************ 00:14:36.764 13:24:25 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:14:36.764 13:24:25 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:36.764 13:24:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:36.764 13:24:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:36.764 ************************************ 00:14:36.764 START TEST raid5f_state_function_test_sb 00:14:36.764 ************************************ 00:14:36.764 13:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:14:36.764 13:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:36.764 13:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:36.764 13:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:36.764 13:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:36.764 13:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:36.764 13:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:36.764 13:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:36.764 13:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:36.764 13:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:36.764 13:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:36.764 13:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:36.764 13:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:36.764 13:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:36.764 13:24:25 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:36.764 13:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:36.764 13:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:36.764 13:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:36.764 13:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:36.764 13:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:36.764 13:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:36.764 13:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:36.764 13:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:36.764 13:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:36.764 13:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:36.764 13:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:36.764 13:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:36.764 13:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80403 00:14:36.764 Process raid pid: 80403 00:14:36.764 13:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:36.764 13:24:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80403' 00:14:36.764 13:24:25 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 80403 00:14:36.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:36.764 13:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 80403 ']' 00:14:36.764 13:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:36.764 13:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:36.764 13:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:36.764 13:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:36.764 13:24:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.024 [2024-11-17 13:24:26.050991] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:14:37.024 [2024-11-17 13:24:26.051222] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:37.024 [2024-11-17 13:24:26.211311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:37.284 [2024-11-17 13:24:26.327897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:37.544 [2024-11-17 13:24:26.540669] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:37.544 [2024-11-17 13:24:26.540706] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:37.804 13:24:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:37.804 13:24:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:37.804 13:24:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:37.804 13:24:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.804 13:24:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.804 [2024-11-17 13:24:26.876010] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:37.804 [2024-11-17 13:24:26.876114] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:37.804 [2024-11-17 13:24:26.876147] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:37.804 [2024-11-17 13:24:26.876220] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:37.804 [2024-11-17 13:24:26.876255] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:14:37.804 [2024-11-17 13:24:26.876279] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:37.804 13:24:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.804 13:24:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:37.804 13:24:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:37.804 13:24:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:37.804 13:24:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:37.804 13:24:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:37.804 13:24:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:37.804 13:24:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.804 13:24:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.804 13:24:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.804 13:24:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.804 13:24:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:37.804 13:24:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.804 13:24:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.804 13:24:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.804 13:24:26 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.804 13:24:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.804 "name": "Existed_Raid", 00:14:37.804 "uuid": "fbe6df78-fd89-47ad-af7c-9cfafcd19198", 00:14:37.804 "strip_size_kb": 64, 00:14:37.804 "state": "configuring", 00:14:37.804 "raid_level": "raid5f", 00:14:37.804 "superblock": true, 00:14:37.804 "num_base_bdevs": 3, 00:14:37.804 "num_base_bdevs_discovered": 0, 00:14:37.804 "num_base_bdevs_operational": 3, 00:14:37.804 "base_bdevs_list": [ 00:14:37.804 { 00:14:37.804 "name": "BaseBdev1", 00:14:37.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.804 "is_configured": false, 00:14:37.804 "data_offset": 0, 00:14:37.804 "data_size": 0 00:14:37.804 }, 00:14:37.804 { 00:14:37.804 "name": "BaseBdev2", 00:14:37.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.804 "is_configured": false, 00:14:37.804 "data_offset": 0, 00:14:37.804 "data_size": 0 00:14:37.804 }, 00:14:37.804 { 00:14:37.804 "name": "BaseBdev3", 00:14:37.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.804 "is_configured": false, 00:14:37.804 "data_offset": 0, 00:14:37.804 "data_size": 0 00:14:37.804 } 00:14:37.804 ] 00:14:37.804 }' 00:14:37.804 13:24:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.804 13:24:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.381 13:24:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:38.381 13:24:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.381 13:24:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.381 [2024-11-17 13:24:27.327187] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:38.381 
[2024-11-17 13:24:27.327237] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:38.381 13:24:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.381 13:24:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:38.381 13:24:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.381 13:24:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.381 [2024-11-17 13:24:27.339153] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:38.381 [2024-11-17 13:24:27.339255] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:38.381 [2024-11-17 13:24:27.339316] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:38.381 [2024-11-17 13:24:27.339360] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:38.381 [2024-11-17 13:24:27.339397] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:38.381 [2024-11-17 13:24:27.339437] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:38.381 13:24:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.381 13:24:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:38.381 13:24:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.381 13:24:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.381 [2024-11-17 13:24:27.387705] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:38.381 BaseBdev1 00:14:38.381 13:24:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.381 13:24:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:38.381 13:24:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:38.381 13:24:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:38.381 13:24:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:38.381 13:24:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:38.381 13:24:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:38.381 13:24:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:38.381 13:24:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.381 13:24:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.381 13:24:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.381 13:24:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:38.381 13:24:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.381 13:24:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.381 [ 00:14:38.381 { 00:14:38.381 "name": "BaseBdev1", 00:14:38.381 "aliases": [ 00:14:38.381 "1b4571b3-1954-4c5d-a0dd-b892a32ea323" 00:14:38.381 ], 00:14:38.381 "product_name": "Malloc disk", 00:14:38.381 "block_size": 512, 00:14:38.381 
"num_blocks": 65536, 00:14:38.381 "uuid": "1b4571b3-1954-4c5d-a0dd-b892a32ea323", 00:14:38.381 "assigned_rate_limits": { 00:14:38.381 "rw_ios_per_sec": 0, 00:14:38.381 "rw_mbytes_per_sec": 0, 00:14:38.381 "r_mbytes_per_sec": 0, 00:14:38.381 "w_mbytes_per_sec": 0 00:14:38.381 }, 00:14:38.381 "claimed": true, 00:14:38.381 "claim_type": "exclusive_write", 00:14:38.381 "zoned": false, 00:14:38.381 "supported_io_types": { 00:14:38.381 "read": true, 00:14:38.381 "write": true, 00:14:38.381 "unmap": true, 00:14:38.381 "flush": true, 00:14:38.381 "reset": true, 00:14:38.381 "nvme_admin": false, 00:14:38.381 "nvme_io": false, 00:14:38.381 "nvme_io_md": false, 00:14:38.381 "write_zeroes": true, 00:14:38.381 "zcopy": true, 00:14:38.381 "get_zone_info": false, 00:14:38.381 "zone_management": false, 00:14:38.381 "zone_append": false, 00:14:38.381 "compare": false, 00:14:38.381 "compare_and_write": false, 00:14:38.381 "abort": true, 00:14:38.381 "seek_hole": false, 00:14:38.381 "seek_data": false, 00:14:38.381 "copy": true, 00:14:38.381 "nvme_iov_md": false 00:14:38.381 }, 00:14:38.381 "memory_domains": [ 00:14:38.382 { 00:14:38.382 "dma_device_id": "system", 00:14:38.382 "dma_device_type": 1 00:14:38.382 }, 00:14:38.382 { 00:14:38.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:38.382 "dma_device_type": 2 00:14:38.382 } 00:14:38.382 ], 00:14:38.382 "driver_specific": {} 00:14:38.382 } 00:14:38.382 ] 00:14:38.382 13:24:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.382 13:24:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:38.382 13:24:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:38.382 13:24:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:38.382 13:24:27 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:38.382 13:24:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:38.382 13:24:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:38.382 13:24:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:38.382 13:24:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.382 13:24:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.382 13:24:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.382 13:24:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.382 13:24:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:38.382 13:24:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.382 13:24:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.382 13:24:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.382 13:24:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.382 13:24:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.382 "name": "Existed_Raid", 00:14:38.382 "uuid": "8a45c2c5-1818-47a0-bdc9-1f00b0901889", 00:14:38.382 "strip_size_kb": 64, 00:14:38.382 "state": "configuring", 00:14:38.382 "raid_level": "raid5f", 00:14:38.382 "superblock": true, 00:14:38.382 "num_base_bdevs": 3, 00:14:38.382 "num_base_bdevs_discovered": 1, 00:14:38.382 "num_base_bdevs_operational": 3, 00:14:38.382 "base_bdevs_list": [ 00:14:38.382 { 00:14:38.382 
"name": "BaseBdev1", 00:14:38.382 "uuid": "1b4571b3-1954-4c5d-a0dd-b892a32ea323", 00:14:38.382 "is_configured": true, 00:14:38.382 "data_offset": 2048, 00:14:38.382 "data_size": 63488 00:14:38.382 }, 00:14:38.382 { 00:14:38.382 "name": "BaseBdev2", 00:14:38.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.382 "is_configured": false, 00:14:38.382 "data_offset": 0, 00:14:38.382 "data_size": 0 00:14:38.382 }, 00:14:38.382 { 00:14:38.382 "name": "BaseBdev3", 00:14:38.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.382 "is_configured": false, 00:14:38.382 "data_offset": 0, 00:14:38.382 "data_size": 0 00:14:38.382 } 00:14:38.382 ] 00:14:38.382 }' 00:14:38.382 13:24:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.382 13:24:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.654 13:24:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:38.654 13:24:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.654 13:24:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.654 [2024-11-17 13:24:27.811061] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:38.654 [2024-11-17 13:24:27.811192] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:38.654 13:24:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.654 13:24:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:38.654 13:24:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.654 13:24:27 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:14:38.654 [2024-11-17 13:24:27.823080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:38.654 [2024-11-17 13:24:27.825071] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:38.654 [2024-11-17 13:24:27.825154] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:38.654 [2024-11-17 13:24:27.825200] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:38.654 [2024-11-17 13:24:27.825251] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:38.654 13:24:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.654 13:24:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:38.654 13:24:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:38.654 13:24:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:38.654 13:24:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:38.654 13:24:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:38.654 13:24:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:38.654 13:24:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:38.654 13:24:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:38.654 13:24:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.654 13:24:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:14:38.654 13:24:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.654 13:24:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.654 13:24:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.654 13:24:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.654 13:24:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.654 13:24:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:38.654 13:24:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.914 13:24:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.914 "name": "Existed_Raid", 00:14:38.914 "uuid": "bd3b860b-787f-4916-9c65-a550355e87fa", 00:14:38.914 "strip_size_kb": 64, 00:14:38.914 "state": "configuring", 00:14:38.914 "raid_level": "raid5f", 00:14:38.914 "superblock": true, 00:14:38.914 "num_base_bdevs": 3, 00:14:38.914 "num_base_bdevs_discovered": 1, 00:14:38.914 "num_base_bdevs_operational": 3, 00:14:38.914 "base_bdevs_list": [ 00:14:38.914 { 00:14:38.914 "name": "BaseBdev1", 00:14:38.914 "uuid": "1b4571b3-1954-4c5d-a0dd-b892a32ea323", 00:14:38.914 "is_configured": true, 00:14:38.914 "data_offset": 2048, 00:14:38.914 "data_size": 63488 00:14:38.914 }, 00:14:38.914 { 00:14:38.914 "name": "BaseBdev2", 00:14:38.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.914 "is_configured": false, 00:14:38.914 "data_offset": 0, 00:14:38.914 "data_size": 0 00:14:38.914 }, 00:14:38.914 { 00:14:38.914 "name": "BaseBdev3", 00:14:38.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.914 "is_configured": false, 00:14:38.914 "data_offset": 0, 00:14:38.914 "data_size": 
0 00:14:38.914 } 00:14:38.914 ] 00:14:38.914 }' 00:14:38.914 13:24:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.914 13:24:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.175 13:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:39.175 13:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.175 13:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.175 [2024-11-17 13:24:28.287521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:39.175 BaseBdev2 00:14:39.175 13:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.175 13:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:39.175 13:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:39.175 13:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:39.175 13:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:39.175 13:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:39.175 13:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:39.175 13:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:39.175 13:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.175 13:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.175 13:24:28 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.175 13:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:39.175 13:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.175 13:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.175 [ 00:14:39.175 { 00:14:39.175 "name": "BaseBdev2", 00:14:39.175 "aliases": [ 00:14:39.175 "ef5a0ecb-d138-4985-963c-7d622f8d87fa" 00:14:39.175 ], 00:14:39.175 "product_name": "Malloc disk", 00:14:39.175 "block_size": 512, 00:14:39.175 "num_blocks": 65536, 00:14:39.175 "uuid": "ef5a0ecb-d138-4985-963c-7d622f8d87fa", 00:14:39.175 "assigned_rate_limits": { 00:14:39.175 "rw_ios_per_sec": 0, 00:14:39.175 "rw_mbytes_per_sec": 0, 00:14:39.175 "r_mbytes_per_sec": 0, 00:14:39.175 "w_mbytes_per_sec": 0 00:14:39.175 }, 00:14:39.175 "claimed": true, 00:14:39.175 "claim_type": "exclusive_write", 00:14:39.175 "zoned": false, 00:14:39.175 "supported_io_types": { 00:14:39.175 "read": true, 00:14:39.175 "write": true, 00:14:39.175 "unmap": true, 00:14:39.175 "flush": true, 00:14:39.175 "reset": true, 00:14:39.175 "nvme_admin": false, 00:14:39.175 "nvme_io": false, 00:14:39.175 "nvme_io_md": false, 00:14:39.175 "write_zeroes": true, 00:14:39.175 "zcopy": true, 00:14:39.175 "get_zone_info": false, 00:14:39.175 "zone_management": false, 00:14:39.175 "zone_append": false, 00:14:39.175 "compare": false, 00:14:39.175 "compare_and_write": false, 00:14:39.175 "abort": true, 00:14:39.175 "seek_hole": false, 00:14:39.175 "seek_data": false, 00:14:39.175 "copy": true, 00:14:39.175 "nvme_iov_md": false 00:14:39.175 }, 00:14:39.175 "memory_domains": [ 00:14:39.175 { 00:14:39.175 "dma_device_id": "system", 00:14:39.175 "dma_device_type": 1 00:14:39.175 }, 00:14:39.175 { 00:14:39.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:39.175 "dma_device_type": 2 00:14:39.175 } 
00:14:39.175 ], 00:14:39.175 "driver_specific": {} 00:14:39.175 } 00:14:39.175 ] 00:14:39.175 13:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.175 13:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:39.175 13:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:39.175 13:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:39.175 13:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:39.175 13:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:39.175 13:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:39.175 13:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:39.175 13:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:39.175 13:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:39.175 13:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.175 13:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.175 13:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.175 13:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.175 13:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.175 13:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:14:39.175 13:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.175 13:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.175 13:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.175 13:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.175 "name": "Existed_Raid", 00:14:39.175 "uuid": "bd3b860b-787f-4916-9c65-a550355e87fa", 00:14:39.175 "strip_size_kb": 64, 00:14:39.175 "state": "configuring", 00:14:39.175 "raid_level": "raid5f", 00:14:39.175 "superblock": true, 00:14:39.175 "num_base_bdevs": 3, 00:14:39.175 "num_base_bdevs_discovered": 2, 00:14:39.175 "num_base_bdevs_operational": 3, 00:14:39.175 "base_bdevs_list": [ 00:14:39.175 { 00:14:39.175 "name": "BaseBdev1", 00:14:39.175 "uuid": "1b4571b3-1954-4c5d-a0dd-b892a32ea323", 00:14:39.175 "is_configured": true, 00:14:39.175 "data_offset": 2048, 00:14:39.175 "data_size": 63488 00:14:39.175 }, 00:14:39.175 { 00:14:39.175 "name": "BaseBdev2", 00:14:39.175 "uuid": "ef5a0ecb-d138-4985-963c-7d622f8d87fa", 00:14:39.175 "is_configured": true, 00:14:39.175 "data_offset": 2048, 00:14:39.175 "data_size": 63488 00:14:39.175 }, 00:14:39.175 { 00:14:39.175 "name": "BaseBdev3", 00:14:39.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.175 "is_configured": false, 00:14:39.175 "data_offset": 0, 00:14:39.175 "data_size": 0 00:14:39.175 } 00:14:39.175 ] 00:14:39.175 }' 00:14:39.175 13:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.175 13:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.745 13:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:39.745 13:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:14:39.745 13:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.745 [2024-11-17 13:24:28.731010] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:39.745 [2024-11-17 13:24:28.731360] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:39.745 [2024-11-17 13:24:28.731385] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:39.745 BaseBdev3 00:14:39.745 [2024-11-17 13:24:28.731675] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:39.745 13:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.745 13:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:39.745 13:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:39.745 13:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:39.745 13:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:39.745 13:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:39.745 13:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:39.745 13:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:39.745 13:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.745 13:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.745 [2024-11-17 13:24:28.738357] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:39.745 [2024-11-17 13:24:28.738381] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:39.745 [2024-11-17 13:24:28.738673] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:39.746 13:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.746 13:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:39.746 13:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.746 13:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.746 [ 00:14:39.746 { 00:14:39.746 "name": "BaseBdev3", 00:14:39.746 "aliases": [ 00:14:39.746 "57228747-d624-475b-a2a0-e24d634360ff" 00:14:39.746 ], 00:14:39.746 "product_name": "Malloc disk", 00:14:39.746 "block_size": 512, 00:14:39.746 "num_blocks": 65536, 00:14:39.746 "uuid": "57228747-d624-475b-a2a0-e24d634360ff", 00:14:39.746 "assigned_rate_limits": { 00:14:39.746 "rw_ios_per_sec": 0, 00:14:39.746 "rw_mbytes_per_sec": 0, 00:14:39.746 "r_mbytes_per_sec": 0, 00:14:39.746 "w_mbytes_per_sec": 0 00:14:39.746 }, 00:14:39.746 "claimed": true, 00:14:39.746 "claim_type": "exclusive_write", 00:14:39.746 "zoned": false, 00:14:39.746 "supported_io_types": { 00:14:39.746 "read": true, 00:14:39.746 "write": true, 00:14:39.746 "unmap": true, 00:14:39.746 "flush": true, 00:14:39.746 "reset": true, 00:14:39.746 "nvme_admin": false, 00:14:39.746 "nvme_io": false, 00:14:39.746 "nvme_io_md": false, 00:14:39.746 "write_zeroes": true, 00:14:39.746 "zcopy": true, 00:14:39.746 "get_zone_info": false, 00:14:39.746 "zone_management": false, 00:14:39.746 "zone_append": false, 00:14:39.746 "compare": false, 00:14:39.746 "compare_and_write": false, 00:14:39.746 "abort": true, 00:14:39.746 "seek_hole": false, 00:14:39.746 "seek_data": false, 00:14:39.746 "copy": true, 00:14:39.746 
"nvme_iov_md": false 00:14:39.746 }, 00:14:39.746 "memory_domains": [ 00:14:39.746 { 00:14:39.746 "dma_device_id": "system", 00:14:39.746 "dma_device_type": 1 00:14:39.746 }, 00:14:39.746 { 00:14:39.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:39.746 "dma_device_type": 2 00:14:39.746 } 00:14:39.746 ], 00:14:39.746 "driver_specific": {} 00:14:39.746 } 00:14:39.746 ] 00:14:39.746 13:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.746 13:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:39.746 13:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:39.746 13:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:39.746 13:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:39.746 13:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:39.746 13:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:39.746 13:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:39.746 13:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:39.746 13:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:39.746 13:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.746 13:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.746 13:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.746 13:24:28 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.746 13:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:39.746 13:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.746 13:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.746 13:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.746 13:24:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.746 13:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.746 "name": "Existed_Raid", 00:14:39.746 "uuid": "bd3b860b-787f-4916-9c65-a550355e87fa", 00:14:39.746 "strip_size_kb": 64, 00:14:39.746 "state": "online", 00:14:39.746 "raid_level": "raid5f", 00:14:39.746 "superblock": true, 00:14:39.746 "num_base_bdevs": 3, 00:14:39.746 "num_base_bdevs_discovered": 3, 00:14:39.746 "num_base_bdevs_operational": 3, 00:14:39.746 "base_bdevs_list": [ 00:14:39.746 { 00:14:39.746 "name": "BaseBdev1", 00:14:39.746 "uuid": "1b4571b3-1954-4c5d-a0dd-b892a32ea323", 00:14:39.746 "is_configured": true, 00:14:39.746 "data_offset": 2048, 00:14:39.746 "data_size": 63488 00:14:39.746 }, 00:14:39.746 { 00:14:39.746 "name": "BaseBdev2", 00:14:39.746 "uuid": "ef5a0ecb-d138-4985-963c-7d622f8d87fa", 00:14:39.746 "is_configured": true, 00:14:39.746 "data_offset": 2048, 00:14:39.746 "data_size": 63488 00:14:39.746 }, 00:14:39.746 { 00:14:39.746 "name": "BaseBdev3", 00:14:39.746 "uuid": "57228747-d624-475b-a2a0-e24d634360ff", 00:14:39.746 "is_configured": true, 00:14:39.746 "data_offset": 2048, 00:14:39.746 "data_size": 63488 00:14:39.746 } 00:14:39.746 ] 00:14:39.746 }' 00:14:39.746 13:24:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.746 13:24:28 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.006 13:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:40.006 13:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:40.006 13:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:40.006 13:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:40.006 13:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:40.006 13:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:40.006 13:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:40.006 13:24:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.006 13:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:40.006 13:24:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.006 [2024-11-17 13:24:29.136825] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:40.006 13:24:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.006 13:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:40.006 "name": "Existed_Raid", 00:14:40.006 "aliases": [ 00:14:40.006 "bd3b860b-787f-4916-9c65-a550355e87fa" 00:14:40.006 ], 00:14:40.006 "product_name": "Raid Volume", 00:14:40.006 "block_size": 512, 00:14:40.006 "num_blocks": 126976, 00:14:40.006 "uuid": "bd3b860b-787f-4916-9c65-a550355e87fa", 00:14:40.006 "assigned_rate_limits": { 00:14:40.006 "rw_ios_per_sec": 0, 00:14:40.006 
"rw_mbytes_per_sec": 0, 00:14:40.006 "r_mbytes_per_sec": 0, 00:14:40.006 "w_mbytes_per_sec": 0 00:14:40.006 }, 00:14:40.006 "claimed": false, 00:14:40.006 "zoned": false, 00:14:40.006 "supported_io_types": { 00:14:40.006 "read": true, 00:14:40.006 "write": true, 00:14:40.006 "unmap": false, 00:14:40.006 "flush": false, 00:14:40.006 "reset": true, 00:14:40.006 "nvme_admin": false, 00:14:40.006 "nvme_io": false, 00:14:40.006 "nvme_io_md": false, 00:14:40.006 "write_zeroes": true, 00:14:40.006 "zcopy": false, 00:14:40.006 "get_zone_info": false, 00:14:40.006 "zone_management": false, 00:14:40.006 "zone_append": false, 00:14:40.006 "compare": false, 00:14:40.006 "compare_and_write": false, 00:14:40.006 "abort": false, 00:14:40.006 "seek_hole": false, 00:14:40.006 "seek_data": false, 00:14:40.006 "copy": false, 00:14:40.006 "nvme_iov_md": false 00:14:40.006 }, 00:14:40.006 "driver_specific": { 00:14:40.006 "raid": { 00:14:40.006 "uuid": "bd3b860b-787f-4916-9c65-a550355e87fa", 00:14:40.006 "strip_size_kb": 64, 00:14:40.006 "state": "online", 00:14:40.006 "raid_level": "raid5f", 00:14:40.006 "superblock": true, 00:14:40.006 "num_base_bdevs": 3, 00:14:40.006 "num_base_bdevs_discovered": 3, 00:14:40.006 "num_base_bdevs_operational": 3, 00:14:40.006 "base_bdevs_list": [ 00:14:40.006 { 00:14:40.006 "name": "BaseBdev1", 00:14:40.006 "uuid": "1b4571b3-1954-4c5d-a0dd-b892a32ea323", 00:14:40.006 "is_configured": true, 00:14:40.006 "data_offset": 2048, 00:14:40.006 "data_size": 63488 00:14:40.006 }, 00:14:40.006 { 00:14:40.006 "name": "BaseBdev2", 00:14:40.006 "uuid": "ef5a0ecb-d138-4985-963c-7d622f8d87fa", 00:14:40.006 "is_configured": true, 00:14:40.006 "data_offset": 2048, 00:14:40.006 "data_size": 63488 00:14:40.006 }, 00:14:40.006 { 00:14:40.006 "name": "BaseBdev3", 00:14:40.006 "uuid": "57228747-d624-475b-a2a0-e24d634360ff", 00:14:40.006 "is_configured": true, 00:14:40.006 "data_offset": 2048, 00:14:40.006 "data_size": 63488 00:14:40.006 } 00:14:40.006 ] 00:14:40.006 } 
00:14:40.006 } 00:14:40.006 }' 00:14:40.006 13:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:40.006 13:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:40.006 BaseBdev2 00:14:40.006 BaseBdev3' 00:14:40.007 13:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:40.007 13:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:40.007 13:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:40.007 13:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:40.007 13:24:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.007 13:24:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.007 13:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:40.266 13:24:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.266 13:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:40.266 13:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:40.266 13:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:40.266 13:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:40.266 13:24:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:40.266 13:24:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.266 13:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:40.266 13:24:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.266 13:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:40.266 13:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:40.266 13:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:40.266 13:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:40.266 13:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:40.266 13:24:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.266 13:24:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.266 13:24:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.266 13:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:40.266 13:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:40.266 13:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:40.266 13:24:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.266 13:24:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.266 [2024-11-17 13:24:29.308376] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:40.266 13:24:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.266 13:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:40.266 13:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:40.266 13:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:40.266 13:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:14:40.266 13:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:40.266 13:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:14:40.266 13:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:40.266 13:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:40.266 13:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:40.266 13:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:40.266 13:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:40.266 13:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.266 13:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.266 13:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.266 13:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.266 13:24:29 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.266 13:24:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.266 13:24:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.266 13:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:40.266 13:24:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.266 13:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.266 "name": "Existed_Raid", 00:14:40.266 "uuid": "bd3b860b-787f-4916-9c65-a550355e87fa", 00:14:40.266 "strip_size_kb": 64, 00:14:40.266 "state": "online", 00:14:40.266 "raid_level": "raid5f", 00:14:40.266 "superblock": true, 00:14:40.266 "num_base_bdevs": 3, 00:14:40.266 "num_base_bdevs_discovered": 2, 00:14:40.266 "num_base_bdevs_operational": 2, 00:14:40.266 "base_bdevs_list": [ 00:14:40.266 { 00:14:40.266 "name": null, 00:14:40.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.266 "is_configured": false, 00:14:40.266 "data_offset": 0, 00:14:40.266 "data_size": 63488 00:14:40.266 }, 00:14:40.266 { 00:14:40.266 "name": "BaseBdev2", 00:14:40.266 "uuid": "ef5a0ecb-d138-4985-963c-7d622f8d87fa", 00:14:40.266 "is_configured": true, 00:14:40.266 "data_offset": 2048, 00:14:40.266 "data_size": 63488 00:14:40.266 }, 00:14:40.266 { 00:14:40.266 "name": "BaseBdev3", 00:14:40.266 "uuid": "57228747-d624-475b-a2a0-e24d634360ff", 00:14:40.266 "is_configured": true, 00:14:40.266 "data_offset": 2048, 00:14:40.266 "data_size": 63488 00:14:40.266 } 00:14:40.266 ] 00:14:40.266 }' 00:14:40.266 13:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.266 13:24:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.835 13:24:29 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:40.835 13:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:40.835 13:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.835 13:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:40.835 13:24:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.835 13:24:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.835 13:24:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.836 13:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:40.836 13:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:40.836 13:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:40.836 13:24:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.836 13:24:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.836 [2024-11-17 13:24:29.844383] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:40.836 [2024-11-17 13:24:29.844577] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:40.836 [2024-11-17 13:24:29.945896] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:40.836 13:24:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.836 13:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:40.836 13:24:29 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:40.836 13:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.836 13:24:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.836 13:24:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.836 13:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:40.836 13:24:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.836 13:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:40.836 13:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:40.836 13:24:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:40.836 13:24:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.836 13:24:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.836 [2024-11-17 13:24:30.001846] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:40.836 [2024-11-17 13:24:30.001904] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:41.095 13:24:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.095 13:24:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:41.095 13:24:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:41.095 13:24:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.095 13:24:30 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.095 13:24:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.095 13:24:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:41.095 13:24:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.095 13:24:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:41.095 13:24:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:41.095 13:24:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:41.095 13:24:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:41.095 13:24:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:41.095 13:24:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:41.095 13:24:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.095 13:24:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.095 BaseBdev2 00:14:41.095 13:24:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.095 13:24:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:41.095 13:24:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:41.095 13:24:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:41.095 13:24:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:41.095 13:24:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # 
[[ -z '' ]] 00:14:41.095 13:24:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:41.095 13:24:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:41.095 13:24:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.095 13:24:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.095 13:24:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.095 13:24:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:41.095 13:24:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.095 13:24:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.095 [ 00:14:41.095 { 00:14:41.095 "name": "BaseBdev2", 00:14:41.095 "aliases": [ 00:14:41.095 "675937e9-f6f2-42b4-bbef-2f016365d695" 00:14:41.095 ], 00:14:41.095 "product_name": "Malloc disk", 00:14:41.095 "block_size": 512, 00:14:41.095 "num_blocks": 65536, 00:14:41.095 "uuid": "675937e9-f6f2-42b4-bbef-2f016365d695", 00:14:41.095 "assigned_rate_limits": { 00:14:41.095 "rw_ios_per_sec": 0, 00:14:41.095 "rw_mbytes_per_sec": 0, 00:14:41.095 "r_mbytes_per_sec": 0, 00:14:41.095 "w_mbytes_per_sec": 0 00:14:41.095 }, 00:14:41.095 "claimed": false, 00:14:41.095 "zoned": false, 00:14:41.095 "supported_io_types": { 00:14:41.095 "read": true, 00:14:41.095 "write": true, 00:14:41.095 "unmap": true, 00:14:41.095 "flush": true, 00:14:41.095 "reset": true, 00:14:41.095 "nvme_admin": false, 00:14:41.095 "nvme_io": false, 00:14:41.095 "nvme_io_md": false, 00:14:41.095 "write_zeroes": true, 00:14:41.095 "zcopy": true, 00:14:41.095 "get_zone_info": false, 00:14:41.095 "zone_management": false, 00:14:41.095 "zone_append": false, 
00:14:41.095 "compare": false, 00:14:41.095 "compare_and_write": false, 00:14:41.095 "abort": true, 00:14:41.095 "seek_hole": false, 00:14:41.095 "seek_data": false, 00:14:41.095 "copy": true, 00:14:41.095 "nvme_iov_md": false 00:14:41.095 }, 00:14:41.095 "memory_domains": [ 00:14:41.095 { 00:14:41.095 "dma_device_id": "system", 00:14:41.095 "dma_device_type": 1 00:14:41.095 }, 00:14:41.095 { 00:14:41.095 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:41.095 "dma_device_type": 2 00:14:41.095 } 00:14:41.095 ], 00:14:41.095 "driver_specific": {} 00:14:41.095 } 00:14:41.095 ] 00:14:41.095 13:24:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.095 13:24:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:41.095 13:24:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:41.095 13:24:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:41.095 13:24:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:41.095 13:24:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.095 13:24:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.095 BaseBdev3 00:14:41.095 13:24:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.095 13:24:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:41.095 13:24:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:41.095 13:24:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:41.095 13:24:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:41.095 
13:24:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:41.095 13:24:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:41.095 13:24:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:41.095 13:24:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.095 13:24:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.095 13:24:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.095 13:24:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:41.095 13:24:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.095 13:24:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.354 [ 00:14:41.354 { 00:14:41.354 "name": "BaseBdev3", 00:14:41.354 "aliases": [ 00:14:41.354 "8597acf4-b9de-4b7d-b17d-65bd91b03109" 00:14:41.354 ], 00:14:41.354 "product_name": "Malloc disk", 00:14:41.354 "block_size": 512, 00:14:41.354 "num_blocks": 65536, 00:14:41.354 "uuid": "8597acf4-b9de-4b7d-b17d-65bd91b03109", 00:14:41.354 "assigned_rate_limits": { 00:14:41.354 "rw_ios_per_sec": 0, 00:14:41.354 "rw_mbytes_per_sec": 0, 00:14:41.354 "r_mbytes_per_sec": 0, 00:14:41.354 "w_mbytes_per_sec": 0 00:14:41.354 }, 00:14:41.354 "claimed": false, 00:14:41.354 "zoned": false, 00:14:41.354 "supported_io_types": { 00:14:41.354 "read": true, 00:14:41.354 "write": true, 00:14:41.354 "unmap": true, 00:14:41.354 "flush": true, 00:14:41.354 "reset": true, 00:14:41.354 "nvme_admin": false, 00:14:41.354 "nvme_io": false, 00:14:41.354 "nvme_io_md": false, 00:14:41.354 "write_zeroes": true, 00:14:41.355 "zcopy": true, 00:14:41.355 "get_zone_info": 
false, 00:14:41.355 "zone_management": false, 00:14:41.355 "zone_append": false, 00:14:41.355 "compare": false, 00:14:41.355 "compare_and_write": false, 00:14:41.355 "abort": true, 00:14:41.355 "seek_hole": false, 00:14:41.355 "seek_data": false, 00:14:41.355 "copy": true, 00:14:41.355 "nvme_iov_md": false 00:14:41.355 }, 00:14:41.355 "memory_domains": [ 00:14:41.355 { 00:14:41.355 "dma_device_id": "system", 00:14:41.355 "dma_device_type": 1 00:14:41.355 }, 00:14:41.355 { 00:14:41.355 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:41.355 "dma_device_type": 2 00:14:41.355 } 00:14:41.355 ], 00:14:41.355 "driver_specific": {} 00:14:41.355 } 00:14:41.355 ] 00:14:41.355 13:24:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.355 13:24:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:41.355 13:24:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:41.355 13:24:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:41.355 13:24:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:41.355 13:24:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.355 13:24:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.355 [2024-11-17 13:24:30.336799] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:41.355 [2024-11-17 13:24:30.336887] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:41.355 [2024-11-17 13:24:30.336935] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:41.355 [2024-11-17 13:24:30.338803] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:14:41.355 13:24:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.355 13:24:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:41.355 13:24:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:41.355 13:24:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:41.355 13:24:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:41.355 13:24:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:41.355 13:24:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:41.355 13:24:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.355 13:24:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.355 13:24:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.355 13:24:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.355 13:24:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.355 13:24:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.355 13:24:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.355 13:24:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:41.355 13:24:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.355 13:24:30 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.355 "name": "Existed_Raid", 00:14:41.355 "uuid": "02401f9c-b831-4f7b-babf-f864d7d1977a", 00:14:41.355 "strip_size_kb": 64, 00:14:41.355 "state": "configuring", 00:14:41.355 "raid_level": "raid5f", 00:14:41.355 "superblock": true, 00:14:41.355 "num_base_bdevs": 3, 00:14:41.355 "num_base_bdevs_discovered": 2, 00:14:41.355 "num_base_bdevs_operational": 3, 00:14:41.355 "base_bdevs_list": [ 00:14:41.355 { 00:14:41.355 "name": "BaseBdev1", 00:14:41.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.355 "is_configured": false, 00:14:41.355 "data_offset": 0, 00:14:41.355 "data_size": 0 00:14:41.355 }, 00:14:41.355 { 00:14:41.355 "name": "BaseBdev2", 00:14:41.355 "uuid": "675937e9-f6f2-42b4-bbef-2f016365d695", 00:14:41.355 "is_configured": true, 00:14:41.355 "data_offset": 2048, 00:14:41.355 "data_size": 63488 00:14:41.355 }, 00:14:41.355 { 00:14:41.355 "name": "BaseBdev3", 00:14:41.355 "uuid": "8597acf4-b9de-4b7d-b17d-65bd91b03109", 00:14:41.355 "is_configured": true, 00:14:41.355 "data_offset": 2048, 00:14:41.355 "data_size": 63488 00:14:41.355 } 00:14:41.355 ] 00:14:41.355 }' 00:14:41.355 13:24:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.355 13:24:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.614 13:24:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:41.614 13:24:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.614 13:24:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.614 [2024-11-17 13:24:30.748160] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:41.614 13:24:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.614 
13:24:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:41.614 13:24:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:41.614 13:24:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:41.614 13:24:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:41.614 13:24:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:41.614 13:24:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:41.614 13:24:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.614 13:24:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.614 13:24:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.614 13:24:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.614 13:24:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:41.614 13:24:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.614 13:24:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.614 13:24:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.614 13:24:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.614 13:24:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.614 "name": "Existed_Raid", 00:14:41.614 "uuid": 
"02401f9c-b831-4f7b-babf-f864d7d1977a", 00:14:41.614 "strip_size_kb": 64, 00:14:41.614 "state": "configuring", 00:14:41.614 "raid_level": "raid5f", 00:14:41.614 "superblock": true, 00:14:41.614 "num_base_bdevs": 3, 00:14:41.614 "num_base_bdevs_discovered": 1, 00:14:41.614 "num_base_bdevs_operational": 3, 00:14:41.614 "base_bdevs_list": [ 00:14:41.614 { 00:14:41.614 "name": "BaseBdev1", 00:14:41.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.614 "is_configured": false, 00:14:41.614 "data_offset": 0, 00:14:41.614 "data_size": 0 00:14:41.614 }, 00:14:41.614 { 00:14:41.614 "name": null, 00:14:41.614 "uuid": "675937e9-f6f2-42b4-bbef-2f016365d695", 00:14:41.614 "is_configured": false, 00:14:41.614 "data_offset": 0, 00:14:41.614 "data_size": 63488 00:14:41.614 }, 00:14:41.614 { 00:14:41.614 "name": "BaseBdev3", 00:14:41.614 "uuid": "8597acf4-b9de-4b7d-b17d-65bd91b03109", 00:14:41.614 "is_configured": true, 00:14:41.614 "data_offset": 2048, 00:14:41.614 "data_size": 63488 00:14:41.614 } 00:14:41.614 ] 00:14:41.614 }' 00:14:41.614 13:24:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.614 13:24:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.183 13:24:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:42.183 13:24:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.183 13:24:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.183 13:24:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.183 13:24:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.183 13:24:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:42.183 13:24:31 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:42.183 13:24:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.183 13:24:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.183 [2024-11-17 13:24:31.191341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:42.183 BaseBdev1 00:14:42.183 13:24:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.183 13:24:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:42.183 13:24:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:42.183 13:24:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:42.183 13:24:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:42.183 13:24:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:42.183 13:24:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:42.183 13:24:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:42.183 13:24:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.183 13:24:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.183 13:24:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.183 13:24:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:42.183 13:24:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:14:42.183 13:24:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.183 [ 00:14:42.183 { 00:14:42.183 "name": "BaseBdev1", 00:14:42.183 "aliases": [ 00:14:42.183 "3d66c084-72dd-474a-aab6-0c836537d1c0" 00:14:42.183 ], 00:14:42.183 "product_name": "Malloc disk", 00:14:42.183 "block_size": 512, 00:14:42.183 "num_blocks": 65536, 00:14:42.183 "uuid": "3d66c084-72dd-474a-aab6-0c836537d1c0", 00:14:42.183 "assigned_rate_limits": { 00:14:42.183 "rw_ios_per_sec": 0, 00:14:42.183 "rw_mbytes_per_sec": 0, 00:14:42.183 "r_mbytes_per_sec": 0, 00:14:42.183 "w_mbytes_per_sec": 0 00:14:42.183 }, 00:14:42.183 "claimed": true, 00:14:42.183 "claim_type": "exclusive_write", 00:14:42.183 "zoned": false, 00:14:42.183 "supported_io_types": { 00:14:42.183 "read": true, 00:14:42.183 "write": true, 00:14:42.183 "unmap": true, 00:14:42.183 "flush": true, 00:14:42.183 "reset": true, 00:14:42.183 "nvme_admin": false, 00:14:42.183 "nvme_io": false, 00:14:42.183 "nvme_io_md": false, 00:14:42.183 "write_zeroes": true, 00:14:42.183 "zcopy": true, 00:14:42.183 "get_zone_info": false, 00:14:42.183 "zone_management": false, 00:14:42.183 "zone_append": false, 00:14:42.183 "compare": false, 00:14:42.183 "compare_and_write": false, 00:14:42.183 "abort": true, 00:14:42.183 "seek_hole": false, 00:14:42.183 "seek_data": false, 00:14:42.183 "copy": true, 00:14:42.183 "nvme_iov_md": false 00:14:42.183 }, 00:14:42.183 "memory_domains": [ 00:14:42.183 { 00:14:42.183 "dma_device_id": "system", 00:14:42.183 "dma_device_type": 1 00:14:42.183 }, 00:14:42.183 { 00:14:42.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:42.183 "dma_device_type": 2 00:14:42.183 } 00:14:42.183 ], 00:14:42.183 "driver_specific": {} 00:14:42.183 } 00:14:42.183 ] 00:14:42.183 13:24:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.183 13:24:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # 
return 0 00:14:42.183 13:24:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:42.183 13:24:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:42.183 13:24:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:42.183 13:24:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:42.183 13:24:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:42.183 13:24:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:42.183 13:24:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.183 13:24:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.183 13:24:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.183 13:24:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.184 13:24:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.184 13:24:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:42.184 13:24:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.184 13:24:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.184 13:24:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.184 13:24:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.184 "name": "Existed_Raid", 00:14:42.184 "uuid": 
"02401f9c-b831-4f7b-babf-f864d7d1977a", 00:14:42.184 "strip_size_kb": 64, 00:14:42.184 "state": "configuring", 00:14:42.184 "raid_level": "raid5f", 00:14:42.184 "superblock": true, 00:14:42.184 "num_base_bdevs": 3, 00:14:42.184 "num_base_bdevs_discovered": 2, 00:14:42.184 "num_base_bdevs_operational": 3, 00:14:42.184 "base_bdevs_list": [ 00:14:42.184 { 00:14:42.184 "name": "BaseBdev1", 00:14:42.184 "uuid": "3d66c084-72dd-474a-aab6-0c836537d1c0", 00:14:42.184 "is_configured": true, 00:14:42.184 "data_offset": 2048, 00:14:42.184 "data_size": 63488 00:14:42.184 }, 00:14:42.184 { 00:14:42.184 "name": null, 00:14:42.184 "uuid": "675937e9-f6f2-42b4-bbef-2f016365d695", 00:14:42.184 "is_configured": false, 00:14:42.184 "data_offset": 0, 00:14:42.184 "data_size": 63488 00:14:42.184 }, 00:14:42.184 { 00:14:42.184 "name": "BaseBdev3", 00:14:42.184 "uuid": "8597acf4-b9de-4b7d-b17d-65bd91b03109", 00:14:42.184 "is_configured": true, 00:14:42.184 "data_offset": 2048, 00:14:42.184 "data_size": 63488 00:14:42.184 } 00:14:42.184 ] 00:14:42.184 }' 00:14:42.184 13:24:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.184 13:24:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.443 13:24:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:42.443 13:24:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.443 13:24:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.443 13:24:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.443 13:24:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.702 13:24:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:42.702 13:24:31 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:42.702 13:24:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.702 13:24:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.702 [2024-11-17 13:24:31.682547] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:42.702 13:24:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.702 13:24:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:42.702 13:24:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:42.702 13:24:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:42.702 13:24:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:42.702 13:24:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:42.702 13:24:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:42.702 13:24:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.702 13:24:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.702 13:24:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.702 13:24:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.702 13:24:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.702 13:24:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:42.702 13:24:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.702 13:24:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:42.702 13:24:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.702 13:24:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.702 "name": "Existed_Raid", 00:14:42.702 "uuid": "02401f9c-b831-4f7b-babf-f864d7d1977a", 00:14:42.702 "strip_size_kb": 64, 00:14:42.702 "state": "configuring", 00:14:42.702 "raid_level": "raid5f", 00:14:42.702 "superblock": true, 00:14:42.702 "num_base_bdevs": 3, 00:14:42.702 "num_base_bdevs_discovered": 1, 00:14:42.702 "num_base_bdevs_operational": 3, 00:14:42.702 "base_bdevs_list": [ 00:14:42.702 { 00:14:42.702 "name": "BaseBdev1", 00:14:42.702 "uuid": "3d66c084-72dd-474a-aab6-0c836537d1c0", 00:14:42.702 "is_configured": true, 00:14:42.702 "data_offset": 2048, 00:14:42.702 "data_size": 63488 00:14:42.702 }, 00:14:42.702 { 00:14:42.702 "name": null, 00:14:42.703 "uuid": "675937e9-f6f2-42b4-bbef-2f016365d695", 00:14:42.703 "is_configured": false, 00:14:42.703 "data_offset": 0, 00:14:42.703 "data_size": 63488 00:14:42.703 }, 00:14:42.703 { 00:14:42.703 "name": null, 00:14:42.703 "uuid": "8597acf4-b9de-4b7d-b17d-65bd91b03109", 00:14:42.703 "is_configured": false, 00:14:42.703 "data_offset": 0, 00:14:42.703 "data_size": 63488 00:14:42.703 } 00:14:42.703 ] 00:14:42.703 }' 00:14:42.703 13:24:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.703 13:24:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.962 13:24:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:42.962 13:24:32 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.962 13:24:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.962 13:24:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.962 13:24:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.962 13:24:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:42.962 13:24:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:42.962 13:24:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.962 13:24:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.962 [2024-11-17 13:24:32.129915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:42.962 13:24:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.962 13:24:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:42.962 13:24:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:42.962 13:24:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:42.962 13:24:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:42.962 13:24:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:42.962 13:24:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:42.962 13:24:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:14:42.962 13:24:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.962 13:24:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.962 13:24:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.962 13:24:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.962 13:24:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.962 13:24:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.962 13:24:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:42.962 13:24:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.223 13:24:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.223 "name": "Existed_Raid", 00:14:43.223 "uuid": "02401f9c-b831-4f7b-babf-f864d7d1977a", 00:14:43.223 "strip_size_kb": 64, 00:14:43.223 "state": "configuring", 00:14:43.223 "raid_level": "raid5f", 00:14:43.223 "superblock": true, 00:14:43.223 "num_base_bdevs": 3, 00:14:43.223 "num_base_bdevs_discovered": 2, 00:14:43.223 "num_base_bdevs_operational": 3, 00:14:43.223 "base_bdevs_list": [ 00:14:43.223 { 00:14:43.223 "name": "BaseBdev1", 00:14:43.223 "uuid": "3d66c084-72dd-474a-aab6-0c836537d1c0", 00:14:43.223 "is_configured": true, 00:14:43.223 "data_offset": 2048, 00:14:43.223 "data_size": 63488 00:14:43.223 }, 00:14:43.223 { 00:14:43.223 "name": null, 00:14:43.223 "uuid": "675937e9-f6f2-42b4-bbef-2f016365d695", 00:14:43.223 "is_configured": false, 00:14:43.223 "data_offset": 0, 00:14:43.223 "data_size": 63488 00:14:43.223 }, 00:14:43.223 { 00:14:43.223 "name": "BaseBdev3", 00:14:43.223 "uuid": "8597acf4-b9de-4b7d-b17d-65bd91b03109", 
00:14:43.223 "is_configured": true, 00:14:43.223 "data_offset": 2048, 00:14:43.223 "data_size": 63488 00:14:43.223 } 00:14:43.223 ] 00:14:43.223 }' 00:14:43.223 13:24:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.223 13:24:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.483 13:24:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.483 13:24:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.483 13:24:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.483 13:24:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:43.483 13:24:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.483 13:24:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:43.483 13:24:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:43.483 13:24:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.483 13:24:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.483 [2024-11-17 13:24:32.605111] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:43.483 13:24:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.483 13:24:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:43.483 13:24:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:43.483 13:24:32 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:43.483 13:24:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:43.483 13:24:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:43.483 13:24:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:43.483 13:24:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.483 13:24:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.483 13:24:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.483 13:24:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.483 13:24:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:43.483 13:24:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.483 13:24:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.483 13:24:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.743 13:24:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.743 13:24:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.743 "name": "Existed_Raid", 00:14:43.743 "uuid": "02401f9c-b831-4f7b-babf-f864d7d1977a", 00:14:43.743 "strip_size_kb": 64, 00:14:43.743 "state": "configuring", 00:14:43.743 "raid_level": "raid5f", 00:14:43.743 "superblock": true, 00:14:43.743 "num_base_bdevs": 3, 00:14:43.743 "num_base_bdevs_discovered": 1, 00:14:43.743 "num_base_bdevs_operational": 3, 00:14:43.743 "base_bdevs_list": [ 00:14:43.743 { 00:14:43.743 
"name": null, 00:14:43.743 "uuid": "3d66c084-72dd-474a-aab6-0c836537d1c0", 00:14:43.743 "is_configured": false, 00:14:43.743 "data_offset": 0, 00:14:43.743 "data_size": 63488 00:14:43.743 }, 00:14:43.743 { 00:14:43.743 "name": null, 00:14:43.743 "uuid": "675937e9-f6f2-42b4-bbef-2f016365d695", 00:14:43.743 "is_configured": false, 00:14:43.743 "data_offset": 0, 00:14:43.743 "data_size": 63488 00:14:43.743 }, 00:14:43.743 { 00:14:43.743 "name": "BaseBdev3", 00:14:43.743 "uuid": "8597acf4-b9de-4b7d-b17d-65bd91b03109", 00:14:43.743 "is_configured": true, 00:14:43.743 "data_offset": 2048, 00:14:43.743 "data_size": 63488 00:14:43.743 } 00:14:43.743 ] 00:14:43.743 }' 00:14:43.743 13:24:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.743 13:24:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.003 13:24:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:44.003 13:24:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.003 13:24:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.003 13:24:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.003 13:24:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.003 13:24:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:44.003 13:24:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:44.003 13:24:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.003 13:24:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.003 [2024-11-17 
13:24:33.142263] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:44.003 13:24:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.003 13:24:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:44.003 13:24:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:44.003 13:24:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:44.003 13:24:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:44.003 13:24:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:44.003 13:24:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:44.003 13:24:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.003 13:24:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.003 13:24:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.003 13:24:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.003 13:24:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.003 13:24:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:44.003 13:24:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.003 13:24:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.003 13:24:33 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.003 13:24:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.003 "name": "Existed_Raid", 00:14:44.003 "uuid": "02401f9c-b831-4f7b-babf-f864d7d1977a", 00:14:44.003 "strip_size_kb": 64, 00:14:44.003 "state": "configuring", 00:14:44.003 "raid_level": "raid5f", 00:14:44.003 "superblock": true, 00:14:44.003 "num_base_bdevs": 3, 00:14:44.003 "num_base_bdevs_discovered": 2, 00:14:44.003 "num_base_bdevs_operational": 3, 00:14:44.003 "base_bdevs_list": [ 00:14:44.003 { 00:14:44.003 "name": null, 00:14:44.003 "uuid": "3d66c084-72dd-474a-aab6-0c836537d1c0", 00:14:44.003 "is_configured": false, 00:14:44.003 "data_offset": 0, 00:14:44.003 "data_size": 63488 00:14:44.003 }, 00:14:44.003 { 00:14:44.003 "name": "BaseBdev2", 00:14:44.003 "uuid": "675937e9-f6f2-42b4-bbef-2f016365d695", 00:14:44.003 "is_configured": true, 00:14:44.003 "data_offset": 2048, 00:14:44.003 "data_size": 63488 00:14:44.003 }, 00:14:44.003 { 00:14:44.003 "name": "BaseBdev3", 00:14:44.003 "uuid": "8597acf4-b9de-4b7d-b17d-65bd91b03109", 00:14:44.003 "is_configured": true, 00:14:44.003 "data_offset": 2048, 00:14:44.003 "data_size": 63488 00:14:44.003 } 00:14:44.003 ] 00:14:44.003 }' 00:14:44.003 13:24:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.003 13:24:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.574 13:24:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:44.574 13:24:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.574 13:24:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.574 13:24:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.574 13:24:33 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.574 13:24:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:44.574 13:24:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.574 13:24:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.574 13:24:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.574 13:24:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:44.574 13:24:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.574 13:24:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3d66c084-72dd-474a-aab6-0c836537d1c0 00:14:44.574 13:24:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.574 13:24:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.574 [2024-11-17 13:24:33.678243] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:44.574 [2024-11-17 13:24:33.678445] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:44.574 [2024-11-17 13:24:33.678461] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:44.574 [2024-11-17 13:24:33.678731] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:44.574 NewBaseBdev 00:14:44.574 13:24:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.574 13:24:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:44.574 13:24:33 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:44.574 13:24:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:44.574 13:24:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:44.574 13:24:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:44.574 13:24:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:44.574 13:24:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:44.574 13:24:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.574 13:24:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.574 [2024-11-17 13:24:33.684143] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:44.574 [2024-11-17 13:24:33.684216] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:44.574 [2024-11-17 13:24:33.684417] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:44.574 13:24:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.574 13:24:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:44.574 13:24:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.574 13:24:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.574 [ 00:14:44.574 { 00:14:44.574 "name": "NewBaseBdev", 00:14:44.574 "aliases": [ 00:14:44.574 "3d66c084-72dd-474a-aab6-0c836537d1c0" 00:14:44.574 ], 00:14:44.574 "product_name": "Malloc 
disk", 00:14:44.574 "block_size": 512, 00:14:44.574 "num_blocks": 65536, 00:14:44.574 "uuid": "3d66c084-72dd-474a-aab6-0c836537d1c0", 00:14:44.574 "assigned_rate_limits": { 00:14:44.574 "rw_ios_per_sec": 0, 00:14:44.574 "rw_mbytes_per_sec": 0, 00:14:44.574 "r_mbytes_per_sec": 0, 00:14:44.574 "w_mbytes_per_sec": 0 00:14:44.574 }, 00:14:44.574 "claimed": true, 00:14:44.574 "claim_type": "exclusive_write", 00:14:44.574 "zoned": false, 00:14:44.574 "supported_io_types": { 00:14:44.574 "read": true, 00:14:44.574 "write": true, 00:14:44.574 "unmap": true, 00:14:44.574 "flush": true, 00:14:44.574 "reset": true, 00:14:44.574 "nvme_admin": false, 00:14:44.574 "nvme_io": false, 00:14:44.574 "nvme_io_md": false, 00:14:44.574 "write_zeroes": true, 00:14:44.574 "zcopy": true, 00:14:44.574 "get_zone_info": false, 00:14:44.574 "zone_management": false, 00:14:44.574 "zone_append": false, 00:14:44.574 "compare": false, 00:14:44.574 "compare_and_write": false, 00:14:44.574 "abort": true, 00:14:44.574 "seek_hole": false, 00:14:44.574 "seek_data": false, 00:14:44.574 "copy": true, 00:14:44.574 "nvme_iov_md": false 00:14:44.574 }, 00:14:44.574 "memory_domains": [ 00:14:44.574 { 00:14:44.574 "dma_device_id": "system", 00:14:44.574 "dma_device_type": 1 00:14:44.574 }, 00:14:44.574 { 00:14:44.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.574 "dma_device_type": 2 00:14:44.574 } 00:14:44.574 ], 00:14:44.574 "driver_specific": {} 00:14:44.574 } 00:14:44.574 ] 00:14:44.574 13:24:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.575 13:24:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:44.575 13:24:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:44.575 13:24:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:44.575 13:24:33 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:44.575 13:24:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:44.575 13:24:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:44.575 13:24:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:44.575 13:24:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.575 13:24:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.575 13:24:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.575 13:24:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.575 13:24:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.575 13:24:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:44.575 13:24:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.575 13:24:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.575 13:24:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.575 13:24:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.575 "name": "Existed_Raid", 00:14:44.575 "uuid": "02401f9c-b831-4f7b-babf-f864d7d1977a", 00:14:44.575 "strip_size_kb": 64, 00:14:44.575 "state": "online", 00:14:44.575 "raid_level": "raid5f", 00:14:44.575 "superblock": true, 00:14:44.575 "num_base_bdevs": 3, 00:14:44.575 "num_base_bdevs_discovered": 3, 00:14:44.575 "num_base_bdevs_operational": 3, 00:14:44.575 
"base_bdevs_list": [ 00:14:44.575 { 00:14:44.575 "name": "NewBaseBdev", 00:14:44.575 "uuid": "3d66c084-72dd-474a-aab6-0c836537d1c0", 00:14:44.575 "is_configured": true, 00:14:44.575 "data_offset": 2048, 00:14:44.575 "data_size": 63488 00:14:44.575 }, 00:14:44.575 { 00:14:44.575 "name": "BaseBdev2", 00:14:44.575 "uuid": "675937e9-f6f2-42b4-bbef-2f016365d695", 00:14:44.575 "is_configured": true, 00:14:44.575 "data_offset": 2048, 00:14:44.575 "data_size": 63488 00:14:44.575 }, 00:14:44.575 { 00:14:44.575 "name": "BaseBdev3", 00:14:44.575 "uuid": "8597acf4-b9de-4b7d-b17d-65bd91b03109", 00:14:44.575 "is_configured": true, 00:14:44.575 "data_offset": 2048, 00:14:44.575 "data_size": 63488 00:14:44.575 } 00:14:44.575 ] 00:14:44.575 }' 00:14:44.575 13:24:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.575 13:24:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.144 13:24:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:45.144 13:24:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:45.144 13:24:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:45.144 13:24:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:45.144 13:24:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:45.144 13:24:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:45.144 13:24:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:45.144 13:24:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:45.144 13:24:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:45.144 13:24:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.144 [2024-11-17 13:24:34.122362] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:45.144 13:24:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.144 13:24:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:45.144 "name": "Existed_Raid", 00:14:45.144 "aliases": [ 00:14:45.144 "02401f9c-b831-4f7b-babf-f864d7d1977a" 00:14:45.144 ], 00:14:45.144 "product_name": "Raid Volume", 00:14:45.144 "block_size": 512, 00:14:45.144 "num_blocks": 126976, 00:14:45.144 "uuid": "02401f9c-b831-4f7b-babf-f864d7d1977a", 00:14:45.144 "assigned_rate_limits": { 00:14:45.144 "rw_ios_per_sec": 0, 00:14:45.144 "rw_mbytes_per_sec": 0, 00:14:45.144 "r_mbytes_per_sec": 0, 00:14:45.144 "w_mbytes_per_sec": 0 00:14:45.144 }, 00:14:45.144 "claimed": false, 00:14:45.144 "zoned": false, 00:14:45.144 "supported_io_types": { 00:14:45.144 "read": true, 00:14:45.144 "write": true, 00:14:45.144 "unmap": false, 00:14:45.144 "flush": false, 00:14:45.144 "reset": true, 00:14:45.144 "nvme_admin": false, 00:14:45.144 "nvme_io": false, 00:14:45.144 "nvme_io_md": false, 00:14:45.144 "write_zeroes": true, 00:14:45.144 "zcopy": false, 00:14:45.144 "get_zone_info": false, 00:14:45.144 "zone_management": false, 00:14:45.144 "zone_append": false, 00:14:45.144 "compare": false, 00:14:45.144 "compare_and_write": false, 00:14:45.144 "abort": false, 00:14:45.144 "seek_hole": false, 00:14:45.144 "seek_data": false, 00:14:45.144 "copy": false, 00:14:45.144 "nvme_iov_md": false 00:14:45.144 }, 00:14:45.144 "driver_specific": { 00:14:45.144 "raid": { 00:14:45.144 "uuid": "02401f9c-b831-4f7b-babf-f864d7d1977a", 00:14:45.144 "strip_size_kb": 64, 00:14:45.144 "state": "online", 00:14:45.144 "raid_level": "raid5f", 00:14:45.144 "superblock": true, 00:14:45.144 
"num_base_bdevs": 3, 00:14:45.144 "num_base_bdevs_discovered": 3, 00:14:45.144 "num_base_bdevs_operational": 3, 00:14:45.144 "base_bdevs_list": [ 00:14:45.144 { 00:14:45.144 "name": "NewBaseBdev", 00:14:45.144 "uuid": "3d66c084-72dd-474a-aab6-0c836537d1c0", 00:14:45.144 "is_configured": true, 00:14:45.144 "data_offset": 2048, 00:14:45.144 "data_size": 63488 00:14:45.144 }, 00:14:45.144 { 00:14:45.144 "name": "BaseBdev2", 00:14:45.144 "uuid": "675937e9-f6f2-42b4-bbef-2f016365d695", 00:14:45.144 "is_configured": true, 00:14:45.144 "data_offset": 2048, 00:14:45.144 "data_size": 63488 00:14:45.144 }, 00:14:45.144 { 00:14:45.144 "name": "BaseBdev3", 00:14:45.144 "uuid": "8597acf4-b9de-4b7d-b17d-65bd91b03109", 00:14:45.145 "is_configured": true, 00:14:45.145 "data_offset": 2048, 00:14:45.145 "data_size": 63488 00:14:45.145 } 00:14:45.145 ] 00:14:45.145 } 00:14:45.145 } 00:14:45.145 }' 00:14:45.145 13:24:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:45.145 13:24:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:45.145 BaseBdev2 00:14:45.145 BaseBdev3' 00:14:45.145 13:24:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:45.145 13:24:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:45.145 13:24:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:45.145 13:24:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:45.145 13:24:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.145 13:24:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.145 
13:24:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:45.145 13:24:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.145 13:24:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:45.145 13:24:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:45.145 13:24:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:45.145 13:24:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:45.145 13:24:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.145 13:24:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.145 13:24:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:45.145 13:24:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.145 13:24:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:45.145 13:24:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:45.145 13:24:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:45.145 13:24:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:45.145 13:24:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.145 13:24:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.145 13:24:34 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:45.145 13:24:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.404 13:24:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:45.404 13:24:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:45.404 13:24:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:45.404 13:24:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.404 13:24:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.404 [2024-11-17 13:24:34.381689] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:45.404 [2024-11-17 13:24:34.381718] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:45.404 [2024-11-17 13:24:34.381811] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:45.404 [2024-11-17 13:24:34.382161] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:45.404 [2024-11-17 13:24:34.382178] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:45.404 13:24:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.404 13:24:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80403 00:14:45.404 13:24:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 80403 ']' 00:14:45.404 13:24:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 80403 00:14:45.404 13:24:34 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:45.404 13:24:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:45.404 13:24:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80403 00:14:45.404 killing process with pid 80403 00:14:45.404 13:24:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:45.404 13:24:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:45.404 13:24:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80403' 00:14:45.404 13:24:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 80403 00:14:45.404 [2024-11-17 13:24:34.422838] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:45.404 13:24:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 80403 00:14:45.663 [2024-11-17 13:24:34.737750] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:47.044 13:24:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:47.044 00:14:47.044 real 0m9.916s 00:14:47.044 user 0m15.471s 00:14:47.044 sys 0m1.759s 00:14:47.044 ************************************ 00:14:47.044 END TEST raid5f_state_function_test_sb 00:14:47.044 ************************************ 00:14:47.044 13:24:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:47.044 13:24:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.044 13:24:35 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:14:47.044 13:24:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:47.044 
13:24:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:47.044 13:24:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:47.044 ************************************ 00:14:47.044 START TEST raid5f_superblock_test 00:14:47.044 ************************************ 00:14:47.044 13:24:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:14:47.044 13:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:14:47.044 13:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:14:47.044 13:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:47.044 13:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:47.044 13:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:47.044 13:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:47.044 13:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:47.044 13:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:47.044 13:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:47.044 13:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:47.044 13:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:47.044 13:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:47.044 13:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:47.044 13:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:14:47.044 13:24:35 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:14:47.044 13:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:14:47.044 13:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81012 00:14:47.044 13:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81012 00:14:47.044 13:24:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:47.044 13:24:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 81012 ']' 00:14:47.044 13:24:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:47.044 13:24:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:47.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:47.044 13:24:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:47.044 13:24:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:47.044 13:24:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.044 [2024-11-17 13:24:36.042591] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:14:47.044 [2024-11-17 13:24:36.042843] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81012 ] 00:14:47.044 [2024-11-17 13:24:36.221599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.303 [2024-11-17 13:24:36.330013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:47.562 [2024-11-17 13:24:36.529259] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:47.562 [2024-11-17 13:24:36.529290] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:47.868 13:24:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:47.868 13:24:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:14:47.868 13:24:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:47.868 13:24:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:47.868 13:24:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:47.868 13:24:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:47.868 13:24:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:47.868 13:24:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:47.868 13:24:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:47.868 13:24:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:47.868 13:24:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:14:47.868 13:24:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.868 13:24:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.868 malloc1 00:14:47.868 13:24:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.868 13:24:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:47.868 13:24:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.868 13:24:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.868 [2024-11-17 13:24:36.885668] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:47.868 [2024-11-17 13:24:36.885796] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:47.868 [2024-11-17 13:24:36.885842] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:47.868 [2024-11-17 13:24:36.885873] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:47.868 [2024-11-17 13:24:36.887982] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:47.868 [2024-11-17 13:24:36.888069] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:47.868 pt1 00:14:47.868 13:24:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.868 13:24:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:47.868 13:24:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:47.868 13:24:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:47.868 13:24:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:14:47.868 13:24:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:47.868 13:24:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:47.868 13:24:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:47.868 13:24:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:47.868 13:24:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:47.868 13:24:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.868 13:24:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.868 malloc2 00:14:47.868 13:24:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.868 13:24:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:47.869 13:24:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.869 13:24:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.869 [2024-11-17 13:24:36.942956] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:47.869 [2024-11-17 13:24:36.943055] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:47.869 [2024-11-17 13:24:36.943096] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:47.869 [2024-11-17 13:24:36.943127] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:47.869 [2024-11-17 13:24:36.945161] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:47.869 [2024-11-17 13:24:36.945243] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:47.869 pt2 00:14:47.869 13:24:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.869 13:24:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:47.869 13:24:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:47.869 13:24:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:47.869 13:24:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:47.869 13:24:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:47.869 13:24:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:47.869 13:24:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:47.869 13:24:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:47.869 13:24:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:14:47.869 13:24:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.869 13:24:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.869 malloc3 00:14:47.869 13:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.869 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:47.869 13:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.869 13:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.869 [2024-11-17 13:24:37.012474] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:47.869 [2024-11-17 13:24:37.012581] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:47.869 [2024-11-17 13:24:37.012620] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:47.869 [2024-11-17 13:24:37.012648] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:47.869 [2024-11-17 13:24:37.014672] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:47.869 [2024-11-17 13:24:37.014742] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:47.869 pt3 00:14:47.869 13:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.869 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:47.869 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:47.869 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:14:47.869 13:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.869 13:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.869 [2024-11-17 13:24:37.024520] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:47.869 [2024-11-17 13:24:37.026404] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:47.869 [2024-11-17 13:24:37.026535] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:47.869 [2024-11-17 13:24:37.026834] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:47.869 [2024-11-17 13:24:37.026918] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:14:47.869 [2024-11-17 13:24:37.027280] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:47.869 [2024-11-17 13:24:37.033665] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:47.869 [2024-11-17 13:24:37.033722] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:47.869 [2024-11-17 13:24:37.033982] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:47.869 13:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.869 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:47.869 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:47.869 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:47.869 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:47.869 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:47.869 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:47.869 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.869 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.869 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.869 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.869 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.869 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:14:47.869 13:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.869 13:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.129 13:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.129 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.129 "name": "raid_bdev1", 00:14:48.129 "uuid": "bc07c27d-39a2-4e1f-bf64-73e56a81921c", 00:14:48.129 "strip_size_kb": 64, 00:14:48.129 "state": "online", 00:14:48.129 "raid_level": "raid5f", 00:14:48.129 "superblock": true, 00:14:48.129 "num_base_bdevs": 3, 00:14:48.129 "num_base_bdevs_discovered": 3, 00:14:48.129 "num_base_bdevs_operational": 3, 00:14:48.129 "base_bdevs_list": [ 00:14:48.129 { 00:14:48.129 "name": "pt1", 00:14:48.129 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:48.129 "is_configured": true, 00:14:48.129 "data_offset": 2048, 00:14:48.129 "data_size": 63488 00:14:48.129 }, 00:14:48.129 { 00:14:48.129 "name": "pt2", 00:14:48.129 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:48.129 "is_configured": true, 00:14:48.129 "data_offset": 2048, 00:14:48.129 "data_size": 63488 00:14:48.129 }, 00:14:48.129 { 00:14:48.129 "name": "pt3", 00:14:48.129 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:48.129 "is_configured": true, 00:14:48.129 "data_offset": 2048, 00:14:48.129 "data_size": 63488 00:14:48.129 } 00:14:48.129 ] 00:14:48.129 }' 00:14:48.129 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.129 13:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.388 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:48.388 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:48.388 13:24:37 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:48.388 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:48.388 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:48.388 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:48.388 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:48.388 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:48.388 13:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.388 13:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.388 [2024-11-17 13:24:37.460196] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:48.388 13:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.388 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:48.388 "name": "raid_bdev1", 00:14:48.388 "aliases": [ 00:14:48.388 "bc07c27d-39a2-4e1f-bf64-73e56a81921c" 00:14:48.388 ], 00:14:48.388 "product_name": "Raid Volume", 00:14:48.388 "block_size": 512, 00:14:48.388 "num_blocks": 126976, 00:14:48.388 "uuid": "bc07c27d-39a2-4e1f-bf64-73e56a81921c", 00:14:48.388 "assigned_rate_limits": { 00:14:48.388 "rw_ios_per_sec": 0, 00:14:48.389 "rw_mbytes_per_sec": 0, 00:14:48.389 "r_mbytes_per_sec": 0, 00:14:48.389 "w_mbytes_per_sec": 0 00:14:48.389 }, 00:14:48.389 "claimed": false, 00:14:48.389 "zoned": false, 00:14:48.389 "supported_io_types": { 00:14:48.389 "read": true, 00:14:48.389 "write": true, 00:14:48.389 "unmap": false, 00:14:48.389 "flush": false, 00:14:48.389 "reset": true, 00:14:48.389 "nvme_admin": false, 00:14:48.389 "nvme_io": false, 00:14:48.389 "nvme_io_md": false, 
00:14:48.389 "write_zeroes": true, 00:14:48.389 "zcopy": false, 00:14:48.389 "get_zone_info": false, 00:14:48.389 "zone_management": false, 00:14:48.389 "zone_append": false, 00:14:48.389 "compare": false, 00:14:48.389 "compare_and_write": false, 00:14:48.389 "abort": false, 00:14:48.389 "seek_hole": false, 00:14:48.389 "seek_data": false, 00:14:48.389 "copy": false, 00:14:48.389 "nvme_iov_md": false 00:14:48.389 }, 00:14:48.389 "driver_specific": { 00:14:48.389 "raid": { 00:14:48.389 "uuid": "bc07c27d-39a2-4e1f-bf64-73e56a81921c", 00:14:48.389 "strip_size_kb": 64, 00:14:48.389 "state": "online", 00:14:48.389 "raid_level": "raid5f", 00:14:48.389 "superblock": true, 00:14:48.389 "num_base_bdevs": 3, 00:14:48.389 "num_base_bdevs_discovered": 3, 00:14:48.389 "num_base_bdevs_operational": 3, 00:14:48.389 "base_bdevs_list": [ 00:14:48.389 { 00:14:48.389 "name": "pt1", 00:14:48.389 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:48.389 "is_configured": true, 00:14:48.389 "data_offset": 2048, 00:14:48.389 "data_size": 63488 00:14:48.389 }, 00:14:48.389 { 00:14:48.389 "name": "pt2", 00:14:48.389 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:48.389 "is_configured": true, 00:14:48.389 "data_offset": 2048, 00:14:48.389 "data_size": 63488 00:14:48.389 }, 00:14:48.389 { 00:14:48.389 "name": "pt3", 00:14:48.389 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:48.389 "is_configured": true, 00:14:48.389 "data_offset": 2048, 00:14:48.389 "data_size": 63488 00:14:48.389 } 00:14:48.389 ] 00:14:48.389 } 00:14:48.389 } 00:14:48.389 }' 00:14:48.389 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:48.389 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:48.389 pt2 00:14:48.389 pt3' 00:14:48.389 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:14:48.389 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:48.389 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:48.389 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:48.389 13:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.389 13:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.389 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:48.389 13:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.389 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:48.389 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:48.389 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:48.389 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:48.389 13:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.389 13:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.648 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:48.648 13:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.648 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:48.648 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:48.648 
13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:48.648 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:48.648 13:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.648 13:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.648 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:48.648 13:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.648 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:48.648 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:48.648 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:48.648 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:48.648 13:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.648 13:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.648 [2024-11-17 13:24:37.683738] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:48.648 13:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.648 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=bc07c27d-39a2-4e1f-bf64-73e56a81921c 00:14:48.648 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z bc07c27d-39a2-4e1f-bf64-73e56a81921c ']' 00:14:48.648 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:48.648 13:24:37 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.648 13:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.648 [2024-11-17 13:24:37.727479] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:48.648 [2024-11-17 13:24:37.727506] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:48.648 [2024-11-17 13:24:37.727576] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:48.648 [2024-11-17 13:24:37.727649] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:48.648 [2024-11-17 13:24:37.727659] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:48.648 13:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.648 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.648 13:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.648 13:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.648 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:48.648 13:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.648 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:48.648 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:48.648 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:48.648 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:48.648 13:24:37 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.648 13:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.648 13:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.648 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:48.648 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:48.648 13:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.648 13:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.648 13:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.648 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:48.648 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:14:48.648 13:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.648 13:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.648 13:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.648 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:48.648 13:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.648 13:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.648 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:48.648 13:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.648 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:14:48.648 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:48.648 13:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:14:48.648 13:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:48.648 13:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:48.648 13:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:48.649 13:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:48.649 13:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:48.649 13:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:48.649 13:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.649 13:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.649 [2024-11-17 13:24:37.859378] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:48.649 [2024-11-17 13:24:37.861344] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:48.649 [2024-11-17 13:24:37.861443] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:48.649 [2024-11-17 13:24:37.861517] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:48.649 [2024-11-17 13:24:37.861645] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:48.649 [2024-11-17 13:24:37.861706] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:48.649 [2024-11-17 13:24:37.861772] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:48.649 [2024-11-17 13:24:37.861808] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:14:48.649 request: 00:14:48.649 { 00:14:48.649 "name": "raid_bdev1", 00:14:48.649 "raid_level": "raid5f", 00:14:48.649 "base_bdevs": [ 00:14:48.649 "malloc1", 00:14:48.649 "malloc2", 00:14:48.649 "malloc3" 00:14:48.649 ], 00:14:48.649 "strip_size_kb": 64, 00:14:48.649 "superblock": false, 00:14:48.649 "method": "bdev_raid_create", 00:14:48.649 "req_id": 1 00:14:48.649 } 00:14:48.649 Got JSON-RPC error response 00:14:48.649 response: 00:14:48.649 { 00:14:48.649 "code": -17, 00:14:48.649 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:48.649 } 00:14:48.649 13:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:48.649 13:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:14:48.649 13:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:48.649 13:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:48.649 13:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:48.908 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:48.908 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.908 13:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.908 
13:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.908 13:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.908 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:48.908 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:48.908 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:48.908 13:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.908 13:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.908 [2024-11-17 13:24:37.903243] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:48.908 [2024-11-17 13:24:37.903431] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:48.908 [2024-11-17 13:24:37.903457] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:48.908 [2024-11-17 13:24:37.903470] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:48.908 [2024-11-17 13:24:37.905666] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:48.908 [2024-11-17 13:24:37.905702] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:48.908 [2024-11-17 13:24:37.905779] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:48.908 [2024-11-17 13:24:37.905832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:48.908 pt1 00:14:48.908 13:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.908 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 
3 00:14:48.908 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:48.908 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:48.908 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:48.908 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:48.908 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:48.908 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.908 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.908 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.908 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.908 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.908 13:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.908 13:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.908 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.908 13:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.908 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.908 "name": "raid_bdev1", 00:14:48.908 "uuid": "bc07c27d-39a2-4e1f-bf64-73e56a81921c", 00:14:48.908 "strip_size_kb": 64, 00:14:48.908 "state": "configuring", 00:14:48.908 "raid_level": "raid5f", 00:14:48.908 "superblock": true, 00:14:48.908 "num_base_bdevs": 3, 00:14:48.908 "num_base_bdevs_discovered": 1, 00:14:48.908 
"num_base_bdevs_operational": 3, 00:14:48.908 "base_bdevs_list": [ 00:14:48.908 { 00:14:48.908 "name": "pt1", 00:14:48.908 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:48.908 "is_configured": true, 00:14:48.908 "data_offset": 2048, 00:14:48.908 "data_size": 63488 00:14:48.908 }, 00:14:48.908 { 00:14:48.908 "name": null, 00:14:48.908 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:48.908 "is_configured": false, 00:14:48.908 "data_offset": 2048, 00:14:48.908 "data_size": 63488 00:14:48.908 }, 00:14:48.908 { 00:14:48.908 "name": null, 00:14:48.908 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:48.908 "is_configured": false, 00:14:48.908 "data_offset": 2048, 00:14:48.908 "data_size": 63488 00:14:48.908 } 00:14:48.908 ] 00:14:48.908 }' 00:14:48.908 13:24:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.908 13:24:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.168 13:24:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:14:49.168 13:24:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:49.168 13:24:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.168 13:24:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.168 [2024-11-17 13:24:38.314547] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:49.168 [2024-11-17 13:24:38.314656] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:49.168 [2024-11-17 13:24:38.314696] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:49.168 [2024-11-17 13:24:38.314724] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:49.168 [2024-11-17 13:24:38.315229] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:49.168 [2024-11-17 13:24:38.315293] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:49.168 [2024-11-17 13:24:38.315425] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:49.168 [2024-11-17 13:24:38.315476] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:49.168 pt2 00:14:49.168 13:24:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.168 13:24:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:49.168 13:24:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.168 13:24:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.168 [2024-11-17 13:24:38.326517] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:49.168 13:24:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.168 13:24:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:49.168 13:24:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:49.168 13:24:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:49.168 13:24:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:49.168 13:24:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:49.168 13:24:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:49.168 13:24:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.168 13:24:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:14:49.168 13:24:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.168 13:24:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.168 13:24:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.168 13:24:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.168 13:24:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.168 13:24:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.168 13:24:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.168 13:24:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.168 "name": "raid_bdev1", 00:14:49.168 "uuid": "bc07c27d-39a2-4e1f-bf64-73e56a81921c", 00:14:49.168 "strip_size_kb": 64, 00:14:49.168 "state": "configuring", 00:14:49.168 "raid_level": "raid5f", 00:14:49.168 "superblock": true, 00:14:49.168 "num_base_bdevs": 3, 00:14:49.168 "num_base_bdevs_discovered": 1, 00:14:49.168 "num_base_bdevs_operational": 3, 00:14:49.168 "base_bdevs_list": [ 00:14:49.168 { 00:14:49.168 "name": "pt1", 00:14:49.168 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:49.168 "is_configured": true, 00:14:49.168 "data_offset": 2048, 00:14:49.168 "data_size": 63488 00:14:49.168 }, 00:14:49.168 { 00:14:49.168 "name": null, 00:14:49.168 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:49.168 "is_configured": false, 00:14:49.168 "data_offset": 0, 00:14:49.168 "data_size": 63488 00:14:49.168 }, 00:14:49.168 { 00:14:49.168 "name": null, 00:14:49.169 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:49.169 "is_configured": false, 00:14:49.169 "data_offset": 2048, 00:14:49.169 "data_size": 63488 00:14:49.169 } 00:14:49.169 ] 00:14:49.169 }' 00:14:49.169 13:24:38 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.169 13:24:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.739 13:24:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:49.739 13:24:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:49.739 13:24:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:49.739 13:24:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.739 13:24:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.739 [2024-11-17 13:24:38.697899] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:49.739 [2024-11-17 13:24:38.697971] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:49.739 [2024-11-17 13:24:38.697997] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:14:49.739 [2024-11-17 13:24:38.698008] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:49.739 [2024-11-17 13:24:38.698474] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:49.739 [2024-11-17 13:24:38.698496] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:49.739 [2024-11-17 13:24:38.698574] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:49.739 [2024-11-17 13:24:38.698598] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:49.739 pt2 00:14:49.739 13:24:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.739 13:24:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:49.739 13:24:38 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:49.739 13:24:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:49.739 13:24:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.739 13:24:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.739 [2024-11-17 13:24:38.709857] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:49.739 [2024-11-17 13:24:38.709907] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:49.739 [2024-11-17 13:24:38.709939] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:49.739 [2024-11-17 13:24:38.709948] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:49.739 [2024-11-17 13:24:38.710328] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:49.739 [2024-11-17 13:24:38.710350] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:49.739 [2024-11-17 13:24:38.710410] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:49.739 [2024-11-17 13:24:38.710430] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:49.739 [2024-11-17 13:24:38.710555] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:49.739 [2024-11-17 13:24:38.710566] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:49.739 [2024-11-17 13:24:38.710791] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:49.739 [2024-11-17 13:24:38.716133] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:49.739 [2024-11-17 13:24:38.716155] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:49.739 [2024-11-17 13:24:38.716338] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:49.739 pt3 00:14:49.739 13:24:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.739 13:24:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:49.739 13:24:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:49.739 13:24:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:49.739 13:24:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:49.739 13:24:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:49.739 13:24:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:49.739 13:24:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:49.739 13:24:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:49.739 13:24:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.739 13:24:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.739 13:24:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.739 13:24:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.739 13:24:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.739 13:24:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.739 13:24:38 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.739 13:24:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.739 13:24:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.739 13:24:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.739 "name": "raid_bdev1", 00:14:49.739 "uuid": "bc07c27d-39a2-4e1f-bf64-73e56a81921c", 00:14:49.739 "strip_size_kb": 64, 00:14:49.739 "state": "online", 00:14:49.739 "raid_level": "raid5f", 00:14:49.739 "superblock": true, 00:14:49.739 "num_base_bdevs": 3, 00:14:49.739 "num_base_bdevs_discovered": 3, 00:14:49.740 "num_base_bdevs_operational": 3, 00:14:49.740 "base_bdevs_list": [ 00:14:49.740 { 00:14:49.740 "name": "pt1", 00:14:49.740 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:49.740 "is_configured": true, 00:14:49.740 "data_offset": 2048, 00:14:49.740 "data_size": 63488 00:14:49.740 }, 00:14:49.740 { 00:14:49.740 "name": "pt2", 00:14:49.740 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:49.740 "is_configured": true, 00:14:49.740 "data_offset": 2048, 00:14:49.740 "data_size": 63488 00:14:49.740 }, 00:14:49.740 { 00:14:49.740 "name": "pt3", 00:14:49.740 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:49.740 "is_configured": true, 00:14:49.740 "data_offset": 2048, 00:14:49.740 "data_size": 63488 00:14:49.740 } 00:14:49.740 ] 00:14:49.740 }' 00:14:49.740 13:24:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.740 13:24:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.000 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:50.000 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:50.000 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:14:50.000 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:50.000 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:50.000 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:50.000 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:50.000 13:24:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.000 13:24:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.000 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:50.000 [2024-11-17 13:24:39.162119] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:50.000 13:24:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.000 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:50.000 "name": "raid_bdev1", 00:14:50.000 "aliases": [ 00:14:50.000 "bc07c27d-39a2-4e1f-bf64-73e56a81921c" 00:14:50.000 ], 00:14:50.000 "product_name": "Raid Volume", 00:14:50.000 "block_size": 512, 00:14:50.000 "num_blocks": 126976, 00:14:50.000 "uuid": "bc07c27d-39a2-4e1f-bf64-73e56a81921c", 00:14:50.000 "assigned_rate_limits": { 00:14:50.000 "rw_ios_per_sec": 0, 00:14:50.000 "rw_mbytes_per_sec": 0, 00:14:50.000 "r_mbytes_per_sec": 0, 00:14:50.000 "w_mbytes_per_sec": 0 00:14:50.000 }, 00:14:50.000 "claimed": false, 00:14:50.000 "zoned": false, 00:14:50.000 "supported_io_types": { 00:14:50.000 "read": true, 00:14:50.000 "write": true, 00:14:50.000 "unmap": false, 00:14:50.000 "flush": false, 00:14:50.000 "reset": true, 00:14:50.000 "nvme_admin": false, 00:14:50.000 "nvme_io": false, 00:14:50.000 "nvme_io_md": false, 00:14:50.000 "write_zeroes": true, 00:14:50.000 "zcopy": false, 00:14:50.000 
"get_zone_info": false, 00:14:50.000 "zone_management": false, 00:14:50.000 "zone_append": false, 00:14:50.000 "compare": false, 00:14:50.000 "compare_and_write": false, 00:14:50.000 "abort": false, 00:14:50.000 "seek_hole": false, 00:14:50.000 "seek_data": false, 00:14:50.000 "copy": false, 00:14:50.000 "nvme_iov_md": false 00:14:50.000 }, 00:14:50.000 "driver_specific": { 00:14:50.000 "raid": { 00:14:50.000 "uuid": "bc07c27d-39a2-4e1f-bf64-73e56a81921c", 00:14:50.000 "strip_size_kb": 64, 00:14:50.000 "state": "online", 00:14:50.000 "raid_level": "raid5f", 00:14:50.000 "superblock": true, 00:14:50.000 "num_base_bdevs": 3, 00:14:50.000 "num_base_bdevs_discovered": 3, 00:14:50.000 "num_base_bdevs_operational": 3, 00:14:50.000 "base_bdevs_list": [ 00:14:50.000 { 00:14:50.000 "name": "pt1", 00:14:50.000 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:50.000 "is_configured": true, 00:14:50.000 "data_offset": 2048, 00:14:50.000 "data_size": 63488 00:14:50.000 }, 00:14:50.000 { 00:14:50.000 "name": "pt2", 00:14:50.000 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:50.000 "is_configured": true, 00:14:50.000 "data_offset": 2048, 00:14:50.000 "data_size": 63488 00:14:50.000 }, 00:14:50.000 { 00:14:50.000 "name": "pt3", 00:14:50.000 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:50.000 "is_configured": true, 00:14:50.000 "data_offset": 2048, 00:14:50.000 "data_size": 63488 00:14:50.000 } 00:14:50.000 ] 00:14:50.000 } 00:14:50.000 } 00:14:50.000 }' 00:14:50.000 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:50.261 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:50.261 pt2 00:14:50.261 pt3' 00:14:50.261 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:50.261 13:24:39 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:50.261 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:50.261 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:50.261 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:50.261 13:24:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.261 13:24:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.261 13:24:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.261 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:50.261 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:50.261 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:50.261 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:50.261 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:50.261 13:24:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.261 13:24:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.261 13:24:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.261 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:50.261 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:50.261 13:24:39 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:50.261 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:50.261 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:50.261 13:24:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.261 13:24:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.261 13:24:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.261 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:50.261 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:50.261 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:50.261 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:50.261 13:24:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.261 13:24:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.261 [2024-11-17 13:24:39.397627] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:50.261 13:24:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.261 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' bc07c27d-39a2-4e1f-bf64-73e56a81921c '!=' bc07c27d-39a2-4e1f-bf64-73e56a81921c ']' 00:14:50.261 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:14:50.261 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:50.261 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:14:50.261 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:14:50.261 13:24:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.261 13:24:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.261 [2024-11-17 13:24:39.441460] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:50.261 13:24:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.261 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:50.261 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:50.261 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:50.261 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:50.261 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:50.261 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:50.261 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.261 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.261 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.261 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.261 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.261 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.261 13:24:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:50.261 13:24:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.261 13:24:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.522 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.522 "name": "raid_bdev1", 00:14:50.522 "uuid": "bc07c27d-39a2-4e1f-bf64-73e56a81921c", 00:14:50.522 "strip_size_kb": 64, 00:14:50.522 "state": "online", 00:14:50.522 "raid_level": "raid5f", 00:14:50.522 "superblock": true, 00:14:50.522 "num_base_bdevs": 3, 00:14:50.522 "num_base_bdevs_discovered": 2, 00:14:50.522 "num_base_bdevs_operational": 2, 00:14:50.522 "base_bdevs_list": [ 00:14:50.522 { 00:14:50.522 "name": null, 00:14:50.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.522 "is_configured": false, 00:14:50.522 "data_offset": 0, 00:14:50.522 "data_size": 63488 00:14:50.522 }, 00:14:50.522 { 00:14:50.522 "name": "pt2", 00:14:50.522 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:50.522 "is_configured": true, 00:14:50.522 "data_offset": 2048, 00:14:50.522 "data_size": 63488 00:14:50.522 }, 00:14:50.522 { 00:14:50.522 "name": "pt3", 00:14:50.522 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:50.522 "is_configured": true, 00:14:50.522 "data_offset": 2048, 00:14:50.522 "data_size": 63488 00:14:50.522 } 00:14:50.522 ] 00:14:50.522 }' 00:14:50.522 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.522 13:24:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.782 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:50.782 13:24:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.782 13:24:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.782 [2024-11-17 13:24:39.820794] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:50.782 [2024-11-17 13:24:39.820887] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:50.782 [2024-11-17 13:24:39.820986] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:50.782 [2024-11-17 13:24:39.821047] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:50.782 [2024-11-17 13:24:39.821061] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:50.782 13:24:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.782 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.782 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:14:50.782 13:24:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.782 13:24:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.782 13:24:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.782 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:14:50.782 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:14:50.782 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:14:50.782 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:50.782 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:14:50.782 13:24:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.782 13:24:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:14:50.782 13:24:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.782 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:50.782 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:50.782 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:14:50.782 13:24:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.782 13:24:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.782 13:24:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.782 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:50.782 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:50.782 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:14:50.782 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:50.782 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:50.782 13:24:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.782 13:24:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.782 [2024-11-17 13:24:39.892620] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:50.782 [2024-11-17 13:24:39.892738] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:50.782 [2024-11-17 13:24:39.892757] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:14:50.782 [2024-11-17 13:24:39.892768] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:14:50.782 [2024-11-17 13:24:39.894910] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:50.782 [2024-11-17 13:24:39.894947] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:50.782 [2024-11-17 13:24:39.895022] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:50.782 [2024-11-17 13:24:39.895073] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:50.782 pt2 00:14:50.782 13:24:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.782 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:14:50.782 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:50.783 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:50.783 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:50.783 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:50.783 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:50.783 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.783 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.783 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.783 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.783 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.783 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:50.783 13:24:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.783 13:24:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.783 13:24:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.783 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.783 "name": "raid_bdev1", 00:14:50.783 "uuid": "bc07c27d-39a2-4e1f-bf64-73e56a81921c", 00:14:50.783 "strip_size_kb": 64, 00:14:50.783 "state": "configuring", 00:14:50.783 "raid_level": "raid5f", 00:14:50.783 "superblock": true, 00:14:50.783 "num_base_bdevs": 3, 00:14:50.783 "num_base_bdevs_discovered": 1, 00:14:50.783 "num_base_bdevs_operational": 2, 00:14:50.783 "base_bdevs_list": [ 00:14:50.783 { 00:14:50.783 "name": null, 00:14:50.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.783 "is_configured": false, 00:14:50.783 "data_offset": 2048, 00:14:50.783 "data_size": 63488 00:14:50.783 }, 00:14:50.783 { 00:14:50.783 "name": "pt2", 00:14:50.783 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:50.783 "is_configured": true, 00:14:50.783 "data_offset": 2048, 00:14:50.783 "data_size": 63488 00:14:50.783 }, 00:14:50.783 { 00:14:50.783 "name": null, 00:14:50.783 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:50.783 "is_configured": false, 00:14:50.783 "data_offset": 2048, 00:14:50.783 "data_size": 63488 00:14:50.783 } 00:14:50.783 ] 00:14:50.783 }' 00:14:50.783 13:24:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.783 13:24:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.352 13:24:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:51.352 13:24:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:51.352 13:24:40 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=2 00:14:51.352 13:24:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:51.352 13:24:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.352 13:24:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.352 [2024-11-17 13:24:40.295955] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:51.352 [2024-11-17 13:24:40.296070] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:51.352 [2024-11-17 13:24:40.296112] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:51.352 [2024-11-17 13:24:40.296143] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:51.352 [2024-11-17 13:24:40.296688] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:51.352 [2024-11-17 13:24:40.296756] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:51.352 [2024-11-17 13:24:40.296883] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:51.352 [2024-11-17 13:24:40.296958] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:51.352 [2024-11-17 13:24:40.297116] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:51.352 [2024-11-17 13:24:40.297159] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:51.352 [2024-11-17 13:24:40.297463] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:51.352 [2024-11-17 13:24:40.302595] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:51.352 [2024-11-17 13:24:40.302649] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created 
with name raid_bdev1, raid_bdev 0x617000008200 00:14:51.352 [2024-11-17 13:24:40.303008] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:51.352 pt3 00:14:51.352 13:24:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.352 13:24:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:51.352 13:24:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:51.352 13:24:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:51.352 13:24:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:51.352 13:24:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:51.352 13:24:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:51.352 13:24:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.352 13:24:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.352 13:24:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.352 13:24:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.352 13:24:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.352 13:24:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.352 13:24:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.352 13:24:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.352 13:24:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.352 13:24:40 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.352 "name": "raid_bdev1", 00:14:51.352 "uuid": "bc07c27d-39a2-4e1f-bf64-73e56a81921c", 00:14:51.352 "strip_size_kb": 64, 00:14:51.352 "state": "online", 00:14:51.352 "raid_level": "raid5f", 00:14:51.352 "superblock": true, 00:14:51.352 "num_base_bdevs": 3, 00:14:51.352 "num_base_bdevs_discovered": 2, 00:14:51.352 "num_base_bdevs_operational": 2, 00:14:51.352 "base_bdevs_list": [ 00:14:51.352 { 00:14:51.352 "name": null, 00:14:51.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.352 "is_configured": false, 00:14:51.352 "data_offset": 2048, 00:14:51.352 "data_size": 63488 00:14:51.352 }, 00:14:51.352 { 00:14:51.352 "name": "pt2", 00:14:51.352 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:51.352 "is_configured": true, 00:14:51.352 "data_offset": 2048, 00:14:51.352 "data_size": 63488 00:14:51.352 }, 00:14:51.352 { 00:14:51.352 "name": "pt3", 00:14:51.352 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:51.352 "is_configured": true, 00:14:51.352 "data_offset": 2048, 00:14:51.352 "data_size": 63488 00:14:51.352 } 00:14:51.352 ] 00:14:51.352 }' 00:14:51.352 13:24:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.352 13:24:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.611 13:24:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:51.611 13:24:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.611 13:24:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.611 [2024-11-17 13:24:40.657413] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:51.611 [2024-11-17 13:24:40.657463] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:51.611 [2024-11-17 13:24:40.657547] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:51.611 [2024-11-17 13:24:40.657614] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:51.611 [2024-11-17 13:24:40.657636] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:14:51.611 13:24:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.611 13:24:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:14:51.611 13:24:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.611 13:24:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.611 13:24:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.611 13:24:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.611 13:24:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:14:51.611 13:24:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:14:51.611 13:24:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:14:51.611 13:24:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:14:51.611 13:24:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:14:51.611 13:24:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.611 13:24:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.611 13:24:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.611 13:24:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:14:51.611 13:24:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.611 13:24:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.611 [2024-11-17 13:24:40.717400] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:51.611 [2024-11-17 13:24:40.717469] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:51.611 [2024-11-17 13:24:40.717495] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:51.611 [2024-11-17 13:24:40.717509] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:51.611 [2024-11-17 13:24:40.720092] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:51.611 [2024-11-17 13:24:40.720135] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:51.611 [2024-11-17 13:24:40.720245] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:51.611 [2024-11-17 13:24:40.720318] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:51.611 [2024-11-17 13:24:40.720455] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:51.611 [2024-11-17 13:24:40.720472] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:51.611 [2024-11-17 13:24:40.720491] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:14:51.611 [2024-11-17 13:24:40.720579] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:51.611 pt1 00:14:51.611 13:24:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.611 13:24:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:14:51.611 13:24:40 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:14:51.611 13:24:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:51.611 13:24:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:51.611 13:24:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:51.611 13:24:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:51.611 13:24:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:51.611 13:24:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.611 13:24:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.611 13:24:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.611 13:24:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.611 13:24:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.611 13:24:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.611 13:24:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.611 13:24:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.611 13:24:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.611 13:24:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.611 "name": "raid_bdev1", 00:14:51.611 "uuid": "bc07c27d-39a2-4e1f-bf64-73e56a81921c", 00:14:51.611 "strip_size_kb": 64, 00:14:51.611 "state": "configuring", 00:14:51.611 "raid_level": "raid5f", 00:14:51.611 
"superblock": true, 00:14:51.611 "num_base_bdevs": 3, 00:14:51.611 "num_base_bdevs_discovered": 1, 00:14:51.611 "num_base_bdevs_operational": 2, 00:14:51.611 "base_bdevs_list": [ 00:14:51.611 { 00:14:51.611 "name": null, 00:14:51.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.611 "is_configured": false, 00:14:51.611 "data_offset": 2048, 00:14:51.611 "data_size": 63488 00:14:51.611 }, 00:14:51.611 { 00:14:51.611 "name": "pt2", 00:14:51.611 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:51.611 "is_configured": true, 00:14:51.611 "data_offset": 2048, 00:14:51.611 "data_size": 63488 00:14:51.611 }, 00:14:51.611 { 00:14:51.611 "name": null, 00:14:51.611 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:51.611 "is_configured": false, 00:14:51.611 "data_offset": 2048, 00:14:51.611 "data_size": 63488 00:14:51.611 } 00:14:51.611 ] 00:14:51.611 }' 00:14:51.611 13:24:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.611 13:24:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.179 13:24:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:14:52.179 13:24:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.179 13:24:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:52.179 13:24:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.179 13:24:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.179 13:24:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:14:52.179 13:24:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:52.179 13:24:41 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.179 13:24:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.179 [2024-11-17 13:24:41.168712] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:52.179 [2024-11-17 13:24:41.168837] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:52.179 [2024-11-17 13:24:41.168917] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:14:52.179 [2024-11-17 13:24:41.168956] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:52.179 [2024-11-17 13:24:41.169558] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:52.179 [2024-11-17 13:24:41.169643] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:52.179 [2024-11-17 13:24:41.169821] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:52.179 [2024-11-17 13:24:41.169892] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:52.179 [2024-11-17 13:24:41.170100] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:14:52.179 [2024-11-17 13:24:41.170149] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:52.179 [2024-11-17 13:24:41.170526] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:52.179 [2024-11-17 13:24:41.177386] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:14:52.179 [2024-11-17 13:24:41.177455] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:14:52.179 [2024-11-17 13:24:41.177854] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:52.179 pt3 00:14:52.179 13:24:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:52.179 13:24:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:52.179 13:24:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:52.179 13:24:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:52.179 13:24:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:52.179 13:24:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:52.179 13:24:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:52.179 13:24:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.179 13:24:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.179 13:24:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.179 13:24:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.179 13:24:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.179 13:24:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.179 13:24:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.179 13:24:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.179 13:24:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.179 13:24:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.179 "name": "raid_bdev1", 00:14:52.179 "uuid": "bc07c27d-39a2-4e1f-bf64-73e56a81921c", 00:14:52.179 "strip_size_kb": 64, 00:14:52.179 "state": "online", 00:14:52.179 "raid_level": 
"raid5f", 00:14:52.179 "superblock": true, 00:14:52.179 "num_base_bdevs": 3, 00:14:52.179 "num_base_bdevs_discovered": 2, 00:14:52.179 "num_base_bdevs_operational": 2, 00:14:52.179 "base_bdevs_list": [ 00:14:52.179 { 00:14:52.179 "name": null, 00:14:52.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.179 "is_configured": false, 00:14:52.179 "data_offset": 2048, 00:14:52.179 "data_size": 63488 00:14:52.179 }, 00:14:52.179 { 00:14:52.179 "name": "pt2", 00:14:52.179 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:52.179 "is_configured": true, 00:14:52.179 "data_offset": 2048, 00:14:52.179 "data_size": 63488 00:14:52.179 }, 00:14:52.179 { 00:14:52.179 "name": "pt3", 00:14:52.179 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:52.180 "is_configured": true, 00:14:52.180 "data_offset": 2048, 00:14:52.180 "data_size": 63488 00:14:52.180 } 00:14:52.180 ] 00:14:52.180 }' 00:14:52.180 13:24:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.180 13:24:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.439 13:24:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:14:52.439 13:24:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.439 13:24:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.439 13:24:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:52.439 13:24:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.439 13:24:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:14:52.439 13:24:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:52.439 13:24:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:52.439 13:24:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.439 13:24:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:14:52.439 [2024-11-17 13:24:41.617444] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:52.439 13:24:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.439 13:24:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' bc07c27d-39a2-4e1f-bf64-73e56a81921c '!=' bc07c27d-39a2-4e1f-bf64-73e56a81921c ']' 00:14:52.439 13:24:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81012 00:14:52.439 13:24:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 81012 ']' 00:14:52.439 13:24:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 81012 00:14:52.439 13:24:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:14:52.439 13:24:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:52.439 13:24:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81012 00:14:52.699 13:24:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:52.699 13:24:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:52.699 killing process with pid 81012 00:14:52.699 13:24:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81012' 00:14:52.699 13:24:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 81012 00:14:52.699 [2024-11-17 13:24:41.686384] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:52.699 [2024-11-17 13:24:41.686485] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:14:52.699 [2024-11-17 13:24:41.686553] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:52.699 13:24:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 81012 00:14:52.699 [2024-11-17 13:24:41.686565] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:14:52.959 [2024-11-17 13:24:42.014120] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:54.340 13:24:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:54.340 00:14:54.340 real 0m7.267s 00:14:54.340 user 0m11.040s 00:14:54.340 sys 0m1.299s 00:14:54.340 13:24:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:54.340 ************************************ 00:14:54.340 END TEST raid5f_superblock_test 00:14:54.340 ************************************ 00:14:54.340 13:24:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.340 13:24:43 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:14:54.340 13:24:43 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:14:54.340 13:24:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:54.340 13:24:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:54.340 13:24:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:54.340 ************************************ 00:14:54.340 START TEST raid5f_rebuild_test 00:14:54.340 ************************************ 00:14:54.340 13:24:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:14:54.340 13:24:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:14:54.340 13:24:43 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:14:54.340 13:24:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:54.340 13:24:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:54.340 13:24:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:54.340 13:24:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:54.340 13:24:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:54.340 13:24:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:54.340 13:24:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:54.340 13:24:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:54.340 13:24:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:54.340 13:24:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:54.340 13:24:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:54.340 13:24:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:54.340 13:24:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:54.340 13:24:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:54.340 13:24:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:54.340 13:24:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:54.340 13:24:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:54.340 13:24:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:54.340 13:24:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 
00:14:54.340 13:24:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:54.340 13:24:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:54.340 13:24:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:14:54.340 13:24:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:14:54.340 13:24:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:14:54.340 13:24:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:14:54.340 13:24:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:54.340 13:24:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81455 00:14:54.340 13:24:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:54.340 13:24:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81455 00:14:54.340 13:24:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 81455 ']' 00:14:54.340 13:24:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:54.340 13:24:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:54.340 13:24:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:54.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:54.340 13:24:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:54.340 13:24:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.340 [2024-11-17 13:24:43.361175] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:14:54.340 [2024-11-17 13:24:43.361432] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81455 ] 00:14:54.340 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:54.340 Zero copy mechanism will not be used. 00:14:54.340 [2024-11-17 13:24:43.535734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.600 [2024-11-17 13:24:43.655255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.859 [2024-11-17 13:24:43.854825] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:54.859 [2024-11-17 13:24:43.854968] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:55.118 13:24:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:55.118 13:24:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:14:55.118 13:24:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:55.118 13:24:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:55.118 13:24:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.118 13:24:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.118 BaseBdev1_malloc 00:14:55.118 13:24:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.118 
13:24:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:55.118 13:24:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.118 13:24:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.119 [2024-11-17 13:24:44.251240] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:55.119 [2024-11-17 13:24:44.251307] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:55.119 [2024-11-17 13:24:44.251332] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:55.119 [2024-11-17 13:24:44.251343] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:55.119 [2024-11-17 13:24:44.253448] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:55.119 [2024-11-17 13:24:44.253539] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:55.119 BaseBdev1 00:14:55.119 13:24:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.119 13:24:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:55.119 13:24:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:55.119 13:24:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.119 13:24:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.119 BaseBdev2_malloc 00:14:55.119 13:24:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.119 13:24:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:55.119 13:24:44 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.119 13:24:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.119 [2024-11-17 13:24:44.308316] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:55.119 [2024-11-17 13:24:44.308374] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:55.119 [2024-11-17 13:24:44.308392] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:55.119 [2024-11-17 13:24:44.308404] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:55.119 [2024-11-17 13:24:44.310494] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:55.119 [2024-11-17 13:24:44.310532] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:55.119 BaseBdev2 00:14:55.119 13:24:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.119 13:24:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:55.119 13:24:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:55.119 13:24:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.119 13:24:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.379 BaseBdev3_malloc 00:14:55.379 13:24:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.379 13:24:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:55.379 13:24:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.379 13:24:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.379 [2024-11-17 13:24:44.381737] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:55.379 [2024-11-17 13:24:44.381839] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:55.379 [2024-11-17 13:24:44.381867] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:55.379 [2024-11-17 13:24:44.381878] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:55.379 [2024-11-17 13:24:44.384089] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:55.379 [2024-11-17 13:24:44.384133] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:55.379 BaseBdev3 00:14:55.379 13:24:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.379 13:24:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:55.379 13:24:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.379 13:24:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.379 spare_malloc 00:14:55.379 13:24:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.379 13:24:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:55.379 13:24:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.379 13:24:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.379 spare_delay 00:14:55.380 13:24:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.380 13:24:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:55.380 13:24:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:55.380 13:24:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.380 [2024-11-17 13:24:44.452003] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:55.380 [2024-11-17 13:24:44.452058] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:55.380 [2024-11-17 13:24:44.452077] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:55.380 [2024-11-17 13:24:44.452087] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:55.380 [2024-11-17 13:24:44.454366] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:55.380 [2024-11-17 13:24:44.454410] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:55.380 spare 00:14:55.380 13:24:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.380 13:24:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:14:55.380 13:24:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.380 13:24:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.380 [2024-11-17 13:24:44.464050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:55.380 [2024-11-17 13:24:44.465793] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:55.380 [2024-11-17 13:24:44.465855] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:55.380 [2024-11-17 13:24:44.465941] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:55.380 [2024-11-17 13:24:44.465952] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:55.380 [2024-11-17 
13:24:44.466269] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:55.380 [2024-11-17 13:24:44.472362] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:55.380 [2024-11-17 13:24:44.472385] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:55.380 [2024-11-17 13:24:44.472598] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:55.380 13:24:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.380 13:24:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:55.380 13:24:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:55.380 13:24:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:55.380 13:24:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:55.380 13:24:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:55.380 13:24:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:55.380 13:24:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.380 13:24:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.380 13:24:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.380 13:24:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.380 13:24:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.380 13:24:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.380 13:24:44 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.380 13:24:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.380 13:24:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.380 13:24:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.380 "name": "raid_bdev1", 00:14:55.380 "uuid": "657b6cb2-17fd-4d9d-b016-888d4a0358fc", 00:14:55.380 "strip_size_kb": 64, 00:14:55.380 "state": "online", 00:14:55.380 "raid_level": "raid5f", 00:14:55.380 "superblock": false, 00:14:55.380 "num_base_bdevs": 3, 00:14:55.380 "num_base_bdevs_discovered": 3, 00:14:55.380 "num_base_bdevs_operational": 3, 00:14:55.380 "base_bdevs_list": [ 00:14:55.380 { 00:14:55.380 "name": "BaseBdev1", 00:14:55.380 "uuid": "fb5e32cc-f4e1-5e0a-9902-5d5b5d57920c", 00:14:55.380 "is_configured": true, 00:14:55.380 "data_offset": 0, 00:14:55.380 "data_size": 65536 00:14:55.380 }, 00:14:55.380 { 00:14:55.380 "name": "BaseBdev2", 00:14:55.380 "uuid": "2a34b76a-7bdf-5de0-9312-52db2c23b809", 00:14:55.380 "is_configured": true, 00:14:55.380 "data_offset": 0, 00:14:55.380 "data_size": 65536 00:14:55.380 }, 00:14:55.380 { 00:14:55.380 "name": "BaseBdev3", 00:14:55.380 "uuid": "9488662b-029c-5d6e-bdd5-901862ef9e45", 00:14:55.380 "is_configured": true, 00:14:55.380 "data_offset": 0, 00:14:55.380 "data_size": 65536 00:14:55.380 } 00:14:55.380 ] 00:14:55.380 }' 00:14:55.380 13:24:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.380 13:24:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.950 13:24:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:55.950 13:24:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:55.950 13:24:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.950 13:24:44 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.950 [2024-11-17 13:24:44.914421] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:55.950 13:24:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.950 13:24:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:14:55.950 13:24:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:55.950 13:24:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.950 13:24:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.950 13:24:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.950 13:24:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.950 13:24:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:55.950 13:24:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:55.950 13:24:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:55.950 13:24:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:55.950 13:24:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:55.950 13:24:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:55.950 13:24:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:55.950 13:24:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:55.950 13:24:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:55.950 13:24:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # 
local nbd_list 00:14:55.950 13:24:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:55.950 13:24:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:55.950 13:24:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:55.950 13:24:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:55.950 [2024-11-17 13:24:45.157797] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:56.208 /dev/nbd0 00:14:56.208 13:24:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:56.208 13:24:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:56.208 13:24:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:56.208 13:24:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:56.208 13:24:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:56.208 13:24:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:56.209 13:24:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:56.209 13:24:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:56.209 13:24:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:56.209 13:24:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:56.209 13:24:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:56.209 1+0 records in 00:14:56.209 1+0 records out 00:14:56.209 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000496713 s, 8.2 MB/s 00:14:56.209 
13:24:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:56.209 13:24:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:56.209 13:24:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:56.209 13:24:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:56.209 13:24:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:56.209 13:24:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:56.209 13:24:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:56.209 13:24:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:14:56.209 13:24:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:14:56.209 13:24:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:14:56.209 13:24:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:14:56.467 512+0 records in 00:14:56.467 512+0 records out 00:14:56.467 67108864 bytes (67 MB, 64 MiB) copied, 0.361392 s, 186 MB/s 00:14:56.467 13:24:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:56.467 13:24:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:56.467 13:24:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:56.467 13:24:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:56.467 13:24:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:56.467 13:24:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
00:14:56.467 13:24:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:56.726 [2024-11-17 13:24:45.785818] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:56.726 13:24:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:56.726 13:24:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:56.726 13:24:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:56.726 13:24:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:56.726 13:24:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:56.726 13:24:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:56.726 13:24:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:56.726 13:24:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:56.726 13:24:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:56.726 13:24:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.726 13:24:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.726 [2024-11-17 13:24:45.829055] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:56.726 13:24:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.726 13:24:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:56.726 13:24:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:56.726 13:24:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:56.726 13:24:45 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:56.726 13:24:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:56.726 13:24:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:56.726 13:24:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.726 13:24:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.726 13:24:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.726 13:24:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.726 13:24:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.726 13:24:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.726 13:24:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.726 13:24:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.726 13:24:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.726 13:24:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.726 "name": "raid_bdev1", 00:14:56.726 "uuid": "657b6cb2-17fd-4d9d-b016-888d4a0358fc", 00:14:56.726 "strip_size_kb": 64, 00:14:56.726 "state": "online", 00:14:56.726 "raid_level": "raid5f", 00:14:56.726 "superblock": false, 00:14:56.726 "num_base_bdevs": 3, 00:14:56.726 "num_base_bdevs_discovered": 2, 00:14:56.726 "num_base_bdevs_operational": 2, 00:14:56.726 "base_bdevs_list": [ 00:14:56.726 { 00:14:56.726 "name": null, 00:14:56.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.726 "is_configured": false, 00:14:56.726 "data_offset": 0, 00:14:56.726 "data_size": 65536 00:14:56.726 }, 00:14:56.726 { 00:14:56.726 
"name": "BaseBdev2", 00:14:56.726 "uuid": "2a34b76a-7bdf-5de0-9312-52db2c23b809", 00:14:56.727 "is_configured": true, 00:14:56.727 "data_offset": 0, 00:14:56.727 "data_size": 65536 00:14:56.727 }, 00:14:56.727 { 00:14:56.727 "name": "BaseBdev3", 00:14:56.727 "uuid": "9488662b-029c-5d6e-bdd5-901862ef9e45", 00:14:56.727 "is_configured": true, 00:14:56.727 "data_offset": 0, 00:14:56.727 "data_size": 65536 00:14:56.727 } 00:14:56.727 ] 00:14:56.727 }' 00:14:56.727 13:24:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.727 13:24:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.302 13:24:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:57.302 13:24:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.302 13:24:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.302 [2024-11-17 13:24:46.288311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:57.302 [2024-11-17 13:24:46.305579] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:14:57.302 13:24:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.302 13:24:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:57.302 [2024-11-17 13:24:46.313315] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:58.252 13:24:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:58.253 13:24:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:58.253 13:24:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:58.253 13:24:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:14:58.253 13:24:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:58.253 13:24:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.253 13:24:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.253 13:24:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.253 13:24:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.253 13:24:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.253 13:24:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:58.253 "name": "raid_bdev1", 00:14:58.253 "uuid": "657b6cb2-17fd-4d9d-b016-888d4a0358fc", 00:14:58.253 "strip_size_kb": 64, 00:14:58.253 "state": "online", 00:14:58.253 "raid_level": "raid5f", 00:14:58.253 "superblock": false, 00:14:58.253 "num_base_bdevs": 3, 00:14:58.253 "num_base_bdevs_discovered": 3, 00:14:58.253 "num_base_bdevs_operational": 3, 00:14:58.253 "process": { 00:14:58.253 "type": "rebuild", 00:14:58.253 "target": "spare", 00:14:58.253 "progress": { 00:14:58.253 "blocks": 18432, 00:14:58.253 "percent": 14 00:14:58.253 } 00:14:58.253 }, 00:14:58.253 "base_bdevs_list": [ 00:14:58.253 { 00:14:58.253 "name": "spare", 00:14:58.253 "uuid": "e840c98f-127b-52e5-845a-fd54c29fe0b1", 00:14:58.253 "is_configured": true, 00:14:58.253 "data_offset": 0, 00:14:58.253 "data_size": 65536 00:14:58.253 }, 00:14:58.253 { 00:14:58.253 "name": "BaseBdev2", 00:14:58.253 "uuid": "2a34b76a-7bdf-5de0-9312-52db2c23b809", 00:14:58.253 "is_configured": true, 00:14:58.253 "data_offset": 0, 00:14:58.253 "data_size": 65536 00:14:58.253 }, 00:14:58.253 { 00:14:58.253 "name": "BaseBdev3", 00:14:58.253 "uuid": "9488662b-029c-5d6e-bdd5-901862ef9e45", 00:14:58.253 "is_configured": true, 00:14:58.253 "data_offset": 0, 00:14:58.253 
"data_size": 65536 00:14:58.253 } 00:14:58.253 ] 00:14:58.253 }' 00:14:58.253 13:24:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:58.253 13:24:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:58.253 13:24:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:58.253 13:24:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:58.253 13:24:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:58.253 13:24:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.253 13:24:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.253 [2024-11-17 13:24:47.456120] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:58.512 [2024-11-17 13:24:47.523090] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:58.512 [2024-11-17 13:24:47.523162] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:58.512 [2024-11-17 13:24:47.523185] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:58.512 [2024-11-17 13:24:47.523194] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:58.512 13:24:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.512 13:24:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:58.512 13:24:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:58.512 13:24:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:58.512 13:24:47 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:58.512 13:24:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:58.512 13:24:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:58.512 13:24:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.512 13:24:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.512 13:24:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.512 13:24:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.512 13:24:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.512 13:24:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.512 13:24:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.512 13:24:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.512 13:24:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.512 13:24:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.512 "name": "raid_bdev1", 00:14:58.512 "uuid": "657b6cb2-17fd-4d9d-b016-888d4a0358fc", 00:14:58.512 "strip_size_kb": 64, 00:14:58.512 "state": "online", 00:14:58.512 "raid_level": "raid5f", 00:14:58.512 "superblock": false, 00:14:58.512 "num_base_bdevs": 3, 00:14:58.512 "num_base_bdevs_discovered": 2, 00:14:58.512 "num_base_bdevs_operational": 2, 00:14:58.512 "base_bdevs_list": [ 00:14:58.512 { 00:14:58.512 "name": null, 00:14:58.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.512 "is_configured": false, 00:14:58.512 "data_offset": 0, 00:14:58.512 "data_size": 65536 00:14:58.512 }, 00:14:58.512 { 00:14:58.512 "name": "BaseBdev2", 00:14:58.512 
"uuid": "2a34b76a-7bdf-5de0-9312-52db2c23b809", 00:14:58.512 "is_configured": true, 00:14:58.512 "data_offset": 0, 00:14:58.512 "data_size": 65536 00:14:58.512 }, 00:14:58.512 { 00:14:58.512 "name": "BaseBdev3", 00:14:58.512 "uuid": "9488662b-029c-5d6e-bdd5-901862ef9e45", 00:14:58.512 "is_configured": true, 00:14:58.512 "data_offset": 0, 00:14:58.512 "data_size": 65536 00:14:58.512 } 00:14:58.512 ] 00:14:58.512 }' 00:14:58.512 13:24:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.512 13:24:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.080 13:24:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:59.080 13:24:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:59.080 13:24:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:59.080 13:24:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:59.080 13:24:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:59.080 13:24:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.080 13:24:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.080 13:24:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.080 13:24:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.080 13:24:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.080 13:24:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:59.080 "name": "raid_bdev1", 00:14:59.080 "uuid": "657b6cb2-17fd-4d9d-b016-888d4a0358fc", 00:14:59.080 "strip_size_kb": 64, 00:14:59.080 "state": "online", 00:14:59.080 "raid_level": 
"raid5f", 00:14:59.080 "superblock": false, 00:14:59.080 "num_base_bdevs": 3, 00:14:59.080 "num_base_bdevs_discovered": 2, 00:14:59.080 "num_base_bdevs_operational": 2, 00:14:59.080 "base_bdevs_list": [ 00:14:59.080 { 00:14:59.080 "name": null, 00:14:59.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.080 "is_configured": false, 00:14:59.080 "data_offset": 0, 00:14:59.080 "data_size": 65536 00:14:59.080 }, 00:14:59.080 { 00:14:59.080 "name": "BaseBdev2", 00:14:59.080 "uuid": "2a34b76a-7bdf-5de0-9312-52db2c23b809", 00:14:59.080 "is_configured": true, 00:14:59.080 "data_offset": 0, 00:14:59.080 "data_size": 65536 00:14:59.080 }, 00:14:59.080 { 00:14:59.080 "name": "BaseBdev3", 00:14:59.080 "uuid": "9488662b-029c-5d6e-bdd5-901862ef9e45", 00:14:59.080 "is_configured": true, 00:14:59.080 "data_offset": 0, 00:14:59.080 "data_size": 65536 00:14:59.080 } 00:14:59.080 ] 00:14:59.080 }' 00:14:59.080 13:24:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:59.080 13:24:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:59.080 13:24:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:59.080 13:24:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:59.080 13:24:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:59.080 13:24:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.080 13:24:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.080 [2024-11-17 13:24:48.138074] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:59.080 [2024-11-17 13:24:48.155878] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:14:59.080 13:24:48 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.080 13:24:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:59.080 [2024-11-17 13:24:48.164286] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:00.017 13:24:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:00.017 13:24:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:00.017 13:24:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:00.018 13:24:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:00.018 13:24:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:00.018 13:24:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.018 13:24:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.018 13:24:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.018 13:24:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.018 13:24:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.018 13:24:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:00.018 "name": "raid_bdev1", 00:15:00.018 "uuid": "657b6cb2-17fd-4d9d-b016-888d4a0358fc", 00:15:00.018 "strip_size_kb": 64, 00:15:00.018 "state": "online", 00:15:00.018 "raid_level": "raid5f", 00:15:00.018 "superblock": false, 00:15:00.018 "num_base_bdevs": 3, 00:15:00.018 "num_base_bdevs_discovered": 3, 00:15:00.018 "num_base_bdevs_operational": 3, 00:15:00.018 "process": { 00:15:00.018 "type": "rebuild", 00:15:00.018 "target": "spare", 00:15:00.018 "progress": { 00:15:00.018 "blocks": 20480, 00:15:00.018 
"percent": 15 00:15:00.018 } 00:15:00.018 }, 00:15:00.018 "base_bdevs_list": [ 00:15:00.018 { 00:15:00.018 "name": "spare", 00:15:00.018 "uuid": "e840c98f-127b-52e5-845a-fd54c29fe0b1", 00:15:00.018 "is_configured": true, 00:15:00.018 "data_offset": 0, 00:15:00.018 "data_size": 65536 00:15:00.018 }, 00:15:00.018 { 00:15:00.018 "name": "BaseBdev2", 00:15:00.018 "uuid": "2a34b76a-7bdf-5de0-9312-52db2c23b809", 00:15:00.018 "is_configured": true, 00:15:00.018 "data_offset": 0, 00:15:00.018 "data_size": 65536 00:15:00.018 }, 00:15:00.018 { 00:15:00.018 "name": "BaseBdev3", 00:15:00.018 "uuid": "9488662b-029c-5d6e-bdd5-901862ef9e45", 00:15:00.018 "is_configured": true, 00:15:00.018 "data_offset": 0, 00:15:00.018 "data_size": 65536 00:15:00.018 } 00:15:00.018 ] 00:15:00.018 }' 00:15:00.018 13:24:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:00.276 13:24:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:00.276 13:24:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:00.276 13:24:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:00.276 13:24:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:00.276 13:24:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:15:00.276 13:24:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:00.276 13:24:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=539 00:15:00.276 13:24:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:00.276 13:24:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:00.276 13:24:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:15:00.276 13:24:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:00.276 13:24:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:00.276 13:24:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:00.276 13:24:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.276 13:24:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.276 13:24:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.276 13:24:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.276 13:24:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.276 13:24:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:00.276 "name": "raid_bdev1", 00:15:00.276 "uuid": "657b6cb2-17fd-4d9d-b016-888d4a0358fc", 00:15:00.276 "strip_size_kb": 64, 00:15:00.276 "state": "online", 00:15:00.276 "raid_level": "raid5f", 00:15:00.276 "superblock": false, 00:15:00.276 "num_base_bdevs": 3, 00:15:00.276 "num_base_bdevs_discovered": 3, 00:15:00.276 "num_base_bdevs_operational": 3, 00:15:00.276 "process": { 00:15:00.276 "type": "rebuild", 00:15:00.276 "target": "spare", 00:15:00.276 "progress": { 00:15:00.276 "blocks": 22528, 00:15:00.276 "percent": 17 00:15:00.276 } 00:15:00.276 }, 00:15:00.276 "base_bdevs_list": [ 00:15:00.276 { 00:15:00.276 "name": "spare", 00:15:00.276 "uuid": "e840c98f-127b-52e5-845a-fd54c29fe0b1", 00:15:00.276 "is_configured": true, 00:15:00.276 "data_offset": 0, 00:15:00.276 "data_size": 65536 00:15:00.276 }, 00:15:00.276 { 00:15:00.276 "name": "BaseBdev2", 00:15:00.276 "uuid": "2a34b76a-7bdf-5de0-9312-52db2c23b809", 00:15:00.276 "is_configured": true, 00:15:00.276 "data_offset": 0, 00:15:00.276 
"data_size": 65536 00:15:00.276 }, 00:15:00.276 { 00:15:00.276 "name": "BaseBdev3", 00:15:00.276 "uuid": "9488662b-029c-5d6e-bdd5-901862ef9e45", 00:15:00.276 "is_configured": true, 00:15:00.276 "data_offset": 0, 00:15:00.276 "data_size": 65536 00:15:00.276 } 00:15:00.276 ] 00:15:00.276 }' 00:15:00.276 13:24:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:00.276 13:24:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:00.276 13:24:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:00.276 13:24:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:00.276 13:24:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:01.652 13:24:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:01.652 13:24:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:01.652 13:24:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:01.653 13:24:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:01.653 13:24:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:01.653 13:24:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:01.653 13:24:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.653 13:24:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.653 13:24:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.653 13:24:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.653 13:24:50 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.653 13:24:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:01.653 "name": "raid_bdev1", 00:15:01.653 "uuid": "657b6cb2-17fd-4d9d-b016-888d4a0358fc", 00:15:01.653 "strip_size_kb": 64, 00:15:01.653 "state": "online", 00:15:01.653 "raid_level": "raid5f", 00:15:01.653 "superblock": false, 00:15:01.653 "num_base_bdevs": 3, 00:15:01.653 "num_base_bdevs_discovered": 3, 00:15:01.653 "num_base_bdevs_operational": 3, 00:15:01.653 "process": { 00:15:01.653 "type": "rebuild", 00:15:01.653 "target": "spare", 00:15:01.653 "progress": { 00:15:01.653 "blocks": 45056, 00:15:01.653 "percent": 34 00:15:01.653 } 00:15:01.653 }, 00:15:01.653 "base_bdevs_list": [ 00:15:01.653 { 00:15:01.653 "name": "spare", 00:15:01.653 "uuid": "e840c98f-127b-52e5-845a-fd54c29fe0b1", 00:15:01.653 "is_configured": true, 00:15:01.653 "data_offset": 0, 00:15:01.653 "data_size": 65536 00:15:01.653 }, 00:15:01.653 { 00:15:01.653 "name": "BaseBdev2", 00:15:01.653 "uuid": "2a34b76a-7bdf-5de0-9312-52db2c23b809", 00:15:01.653 "is_configured": true, 00:15:01.653 "data_offset": 0, 00:15:01.653 "data_size": 65536 00:15:01.653 }, 00:15:01.653 { 00:15:01.653 "name": "BaseBdev3", 00:15:01.653 "uuid": "9488662b-029c-5d6e-bdd5-901862ef9e45", 00:15:01.653 "is_configured": true, 00:15:01.653 "data_offset": 0, 00:15:01.653 "data_size": 65536 00:15:01.653 } 00:15:01.653 ] 00:15:01.653 }' 00:15:01.653 13:24:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:01.653 13:24:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:01.653 13:24:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:01.653 13:24:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:01.653 13:24:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 
00:15:02.591 13:24:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:02.591 13:24:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:02.591 13:24:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:02.591 13:24:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:02.591 13:24:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:02.591 13:24:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:02.591 13:24:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.591 13:24:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.591 13:24:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.591 13:24:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.591 13:24:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.591 13:24:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:02.591 "name": "raid_bdev1", 00:15:02.591 "uuid": "657b6cb2-17fd-4d9d-b016-888d4a0358fc", 00:15:02.591 "strip_size_kb": 64, 00:15:02.591 "state": "online", 00:15:02.591 "raid_level": "raid5f", 00:15:02.591 "superblock": false, 00:15:02.591 "num_base_bdevs": 3, 00:15:02.591 "num_base_bdevs_discovered": 3, 00:15:02.591 "num_base_bdevs_operational": 3, 00:15:02.591 "process": { 00:15:02.591 "type": "rebuild", 00:15:02.591 "target": "spare", 00:15:02.591 "progress": { 00:15:02.591 "blocks": 69632, 00:15:02.591 "percent": 53 00:15:02.591 } 00:15:02.591 }, 00:15:02.591 "base_bdevs_list": [ 00:15:02.591 { 00:15:02.591 "name": "spare", 00:15:02.591 "uuid": 
"e840c98f-127b-52e5-845a-fd54c29fe0b1", 00:15:02.591 "is_configured": true, 00:15:02.591 "data_offset": 0, 00:15:02.591 "data_size": 65536 00:15:02.591 }, 00:15:02.591 { 00:15:02.591 "name": "BaseBdev2", 00:15:02.591 "uuid": "2a34b76a-7bdf-5de0-9312-52db2c23b809", 00:15:02.591 "is_configured": true, 00:15:02.591 "data_offset": 0, 00:15:02.591 "data_size": 65536 00:15:02.591 }, 00:15:02.591 { 00:15:02.591 "name": "BaseBdev3", 00:15:02.591 "uuid": "9488662b-029c-5d6e-bdd5-901862ef9e45", 00:15:02.591 "is_configured": true, 00:15:02.591 "data_offset": 0, 00:15:02.591 "data_size": 65536 00:15:02.591 } 00:15:02.591 ] 00:15:02.591 }' 00:15:02.591 13:24:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:02.591 13:24:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:02.591 13:24:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:02.591 13:24:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:02.591 13:24:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:03.527 13:24:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:03.527 13:24:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:03.527 13:24:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:03.787 13:24:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:03.787 13:24:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:03.787 13:24:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:03.787 13:24:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.787 13:24:52 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.787 13:24:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.787 13:24:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.787 13:24:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.787 13:24:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:03.787 "name": "raid_bdev1", 00:15:03.787 "uuid": "657b6cb2-17fd-4d9d-b016-888d4a0358fc", 00:15:03.787 "strip_size_kb": 64, 00:15:03.787 "state": "online", 00:15:03.787 "raid_level": "raid5f", 00:15:03.787 "superblock": false, 00:15:03.787 "num_base_bdevs": 3, 00:15:03.787 "num_base_bdevs_discovered": 3, 00:15:03.787 "num_base_bdevs_operational": 3, 00:15:03.787 "process": { 00:15:03.787 "type": "rebuild", 00:15:03.787 "target": "spare", 00:15:03.787 "progress": { 00:15:03.787 "blocks": 92160, 00:15:03.787 "percent": 70 00:15:03.787 } 00:15:03.787 }, 00:15:03.787 "base_bdevs_list": [ 00:15:03.787 { 00:15:03.787 "name": "spare", 00:15:03.787 "uuid": "e840c98f-127b-52e5-845a-fd54c29fe0b1", 00:15:03.787 "is_configured": true, 00:15:03.787 "data_offset": 0, 00:15:03.787 "data_size": 65536 00:15:03.787 }, 00:15:03.787 { 00:15:03.787 "name": "BaseBdev2", 00:15:03.787 "uuid": "2a34b76a-7bdf-5de0-9312-52db2c23b809", 00:15:03.787 "is_configured": true, 00:15:03.787 "data_offset": 0, 00:15:03.787 "data_size": 65536 00:15:03.787 }, 00:15:03.787 { 00:15:03.787 "name": "BaseBdev3", 00:15:03.787 "uuid": "9488662b-029c-5d6e-bdd5-901862ef9e45", 00:15:03.787 "is_configured": true, 00:15:03.787 "data_offset": 0, 00:15:03.787 "data_size": 65536 00:15:03.787 } 00:15:03.787 ] 00:15:03.787 }' 00:15:03.787 13:24:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:03.787 13:24:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:15:03.787 13:24:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:03.787 13:24:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:03.787 13:24:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:04.725 13:24:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:04.725 13:24:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:04.725 13:24:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:04.725 13:24:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:04.725 13:24:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:04.725 13:24:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:04.725 13:24:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.725 13:24:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.725 13:24:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.725 13:24:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.725 13:24:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.725 13:24:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:04.725 "name": "raid_bdev1", 00:15:04.725 "uuid": "657b6cb2-17fd-4d9d-b016-888d4a0358fc", 00:15:04.725 "strip_size_kb": 64, 00:15:04.725 "state": "online", 00:15:04.725 "raid_level": "raid5f", 00:15:04.725 "superblock": false, 00:15:04.725 "num_base_bdevs": 3, 00:15:04.725 "num_base_bdevs_discovered": 3, 00:15:04.725 
"num_base_bdevs_operational": 3, 00:15:04.725 "process": { 00:15:04.725 "type": "rebuild", 00:15:04.725 "target": "spare", 00:15:04.725 "progress": { 00:15:04.725 "blocks": 114688, 00:15:04.725 "percent": 87 00:15:04.725 } 00:15:04.725 }, 00:15:04.725 "base_bdevs_list": [ 00:15:04.725 { 00:15:04.725 "name": "spare", 00:15:04.725 "uuid": "e840c98f-127b-52e5-845a-fd54c29fe0b1", 00:15:04.725 "is_configured": true, 00:15:04.725 "data_offset": 0, 00:15:04.725 "data_size": 65536 00:15:04.725 }, 00:15:04.725 { 00:15:04.725 "name": "BaseBdev2", 00:15:04.725 "uuid": "2a34b76a-7bdf-5de0-9312-52db2c23b809", 00:15:04.725 "is_configured": true, 00:15:04.725 "data_offset": 0, 00:15:04.725 "data_size": 65536 00:15:04.725 }, 00:15:04.725 { 00:15:04.725 "name": "BaseBdev3", 00:15:04.725 "uuid": "9488662b-029c-5d6e-bdd5-901862ef9e45", 00:15:04.725 "is_configured": true, 00:15:04.725 "data_offset": 0, 00:15:04.725 "data_size": 65536 00:15:04.725 } 00:15:04.725 ] 00:15:04.725 }' 00:15:04.725 13:24:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:04.984 13:24:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:04.984 13:24:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:04.984 13:24:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:04.984 13:24:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:05.553 [2024-11-17 13:24:54.618904] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:05.553 [2024-11-17 13:24:54.619078] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:05.553 [2024-11-17 13:24:54.619131] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:06.120 13:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:15:06.120 13:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:06.120 13:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:06.121 13:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:06.121 13:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:06.121 13:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:06.121 13:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.121 13:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.121 13:24:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.121 13:24:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.121 13:24:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.121 13:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:06.121 "name": "raid_bdev1", 00:15:06.121 "uuid": "657b6cb2-17fd-4d9d-b016-888d4a0358fc", 00:15:06.121 "strip_size_kb": 64, 00:15:06.121 "state": "online", 00:15:06.121 "raid_level": "raid5f", 00:15:06.121 "superblock": false, 00:15:06.121 "num_base_bdevs": 3, 00:15:06.121 "num_base_bdevs_discovered": 3, 00:15:06.121 "num_base_bdevs_operational": 3, 00:15:06.121 "base_bdevs_list": [ 00:15:06.121 { 00:15:06.121 "name": "spare", 00:15:06.121 "uuid": "e840c98f-127b-52e5-845a-fd54c29fe0b1", 00:15:06.121 "is_configured": true, 00:15:06.121 "data_offset": 0, 00:15:06.121 "data_size": 65536 00:15:06.121 }, 00:15:06.121 { 00:15:06.121 "name": "BaseBdev2", 00:15:06.121 "uuid": "2a34b76a-7bdf-5de0-9312-52db2c23b809", 00:15:06.121 "is_configured": true, 00:15:06.121 
"data_offset": 0, 00:15:06.121 "data_size": 65536 00:15:06.121 }, 00:15:06.121 { 00:15:06.121 "name": "BaseBdev3", 00:15:06.121 "uuid": "9488662b-029c-5d6e-bdd5-901862ef9e45", 00:15:06.121 "is_configured": true, 00:15:06.121 "data_offset": 0, 00:15:06.121 "data_size": 65536 00:15:06.121 } 00:15:06.121 ] 00:15:06.121 }' 00:15:06.121 13:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:06.121 13:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:06.121 13:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:06.121 13:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:06.121 13:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:15:06.121 13:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:06.121 13:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:06.121 13:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:06.121 13:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:06.121 13:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:06.121 13:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.121 13:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.121 13:24:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.121 13:24:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.121 13:24:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.121 13:24:55 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:06.121 "name": "raid_bdev1", 00:15:06.121 "uuid": "657b6cb2-17fd-4d9d-b016-888d4a0358fc", 00:15:06.121 "strip_size_kb": 64, 00:15:06.121 "state": "online", 00:15:06.121 "raid_level": "raid5f", 00:15:06.121 "superblock": false, 00:15:06.121 "num_base_bdevs": 3, 00:15:06.121 "num_base_bdevs_discovered": 3, 00:15:06.121 "num_base_bdevs_operational": 3, 00:15:06.121 "base_bdevs_list": [ 00:15:06.121 { 00:15:06.121 "name": "spare", 00:15:06.121 "uuid": "e840c98f-127b-52e5-845a-fd54c29fe0b1", 00:15:06.121 "is_configured": true, 00:15:06.121 "data_offset": 0, 00:15:06.121 "data_size": 65536 00:15:06.121 }, 00:15:06.121 { 00:15:06.121 "name": "BaseBdev2", 00:15:06.121 "uuid": "2a34b76a-7bdf-5de0-9312-52db2c23b809", 00:15:06.121 "is_configured": true, 00:15:06.121 "data_offset": 0, 00:15:06.121 "data_size": 65536 00:15:06.121 }, 00:15:06.121 { 00:15:06.121 "name": "BaseBdev3", 00:15:06.121 "uuid": "9488662b-029c-5d6e-bdd5-901862ef9e45", 00:15:06.121 "is_configured": true, 00:15:06.121 "data_offset": 0, 00:15:06.121 "data_size": 65536 00:15:06.121 } 00:15:06.121 ] 00:15:06.121 }' 00:15:06.121 13:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:06.121 13:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:06.121 13:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:06.121 13:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:06.121 13:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:06.121 13:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:06.121 13:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:06.121 13:24:55 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:06.121 13:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:06.121 13:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:06.121 13:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.121 13:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.121 13:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.121 13:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.121 13:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.121 13:24:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.121 13:24:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.121 13:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.380 13:24:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.380 13:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.380 "name": "raid_bdev1", 00:15:06.380 "uuid": "657b6cb2-17fd-4d9d-b016-888d4a0358fc", 00:15:06.380 "strip_size_kb": 64, 00:15:06.380 "state": "online", 00:15:06.380 "raid_level": "raid5f", 00:15:06.380 "superblock": false, 00:15:06.380 "num_base_bdevs": 3, 00:15:06.380 "num_base_bdevs_discovered": 3, 00:15:06.380 "num_base_bdevs_operational": 3, 00:15:06.380 "base_bdevs_list": [ 00:15:06.380 { 00:15:06.380 "name": "spare", 00:15:06.380 "uuid": "e840c98f-127b-52e5-845a-fd54c29fe0b1", 00:15:06.380 "is_configured": true, 00:15:06.380 "data_offset": 0, 00:15:06.380 "data_size": 65536 00:15:06.380 }, 00:15:06.380 { 00:15:06.380 
"name": "BaseBdev2", 00:15:06.380 "uuid": "2a34b76a-7bdf-5de0-9312-52db2c23b809", 00:15:06.380 "is_configured": true, 00:15:06.380 "data_offset": 0, 00:15:06.380 "data_size": 65536 00:15:06.380 }, 00:15:06.380 { 00:15:06.380 "name": "BaseBdev3", 00:15:06.380 "uuid": "9488662b-029c-5d6e-bdd5-901862ef9e45", 00:15:06.380 "is_configured": true, 00:15:06.380 "data_offset": 0, 00:15:06.380 "data_size": 65536 00:15:06.380 } 00:15:06.380 ] 00:15:06.380 }' 00:15:06.380 13:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.380 13:24:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.640 13:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:06.640 13:24:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.640 13:24:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.640 [2024-11-17 13:24:55.778834] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:06.640 [2024-11-17 13:24:55.778920] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:06.640 [2024-11-17 13:24:55.779039] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:06.640 [2024-11-17 13:24:55.779205] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:06.640 [2024-11-17 13:24:55.779287] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:06.640 13:24:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.640 13:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.640 13:24:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.640 13:24:55 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.640 13:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:15:06.640 13:24:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.640 13:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:06.640 13:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:06.640 13:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:06.640 13:24:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:06.640 13:24:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:06.640 13:24:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:06.640 13:24:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:06.640 13:24:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:06.640 13:24:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:06.640 13:24:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:06.640 13:24:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:06.640 13:24:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:06.640 13:24:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:06.900 /dev/nbd0 00:15:06.900 13:24:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:06.900 13:24:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:06.900 13:24:56 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:06.900 13:24:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:06.900 13:24:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:06.900 13:24:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:06.900 13:24:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:06.900 13:24:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:06.900 13:24:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:06.900 13:24:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:06.900 13:24:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:06.900 1+0 records in 00:15:06.900 1+0 records out 00:15:06.900 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000354749 s, 11.5 MB/s 00:15:06.900 13:24:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:06.900 13:24:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:06.900 13:24:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:06.900 13:24:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:06.900 13:24:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:06.900 13:24:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:06.900 13:24:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:06.900 13:24:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:07.159 /dev/nbd1 00:15:07.159 13:24:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:07.159 13:24:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:07.159 13:24:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:07.159 13:24:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:07.159 13:24:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:07.159 13:24:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:07.159 13:24:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:07.160 13:24:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:07.160 13:24:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:07.160 13:24:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:07.160 13:24:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:07.160 1+0 records in 00:15:07.160 1+0 records out 00:15:07.160 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000448088 s, 9.1 MB/s 00:15:07.160 13:24:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:07.160 13:24:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:07.160 13:24:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:07.160 13:24:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:07.160 13:24:56 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:07.160 13:24:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:07.160 13:24:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:07.160 13:24:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:07.418 13:24:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:07.418 13:24:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:07.418 13:24:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:07.418 13:24:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:07.418 13:24:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:07.418 13:24:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:07.418 13:24:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:07.677 13:24:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:07.677 13:24:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:07.678 13:24:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:07.678 13:24:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:07.678 13:24:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:07.678 13:24:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:07.678 13:24:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:07.678 13:24:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # 
return 0 00:15:07.678 13:24:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:07.678 13:24:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:07.938 13:24:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:07.938 13:24:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:07.938 13:24:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:07.938 13:24:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:07.938 13:24:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:07.938 13:24:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:07.938 13:24:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:07.938 13:24:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:07.938 13:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:07.938 13:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81455 00:15:07.938 13:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 81455 ']' 00:15:07.938 13:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 81455 00:15:07.938 13:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:15:07.938 13:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:07.938 13:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81455 00:15:07.938 13:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:07.938 killing process with pid 81455 00:15:07.938 
Received shutdown signal, test time was about 60.000000 seconds 00:15:07.938 00:15:07.938 Latency(us) 00:15:07.938 [2024-11-17T13:24:57.162Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:07.938 [2024-11-17T13:24:57.162Z] =================================================================================================================== 00:15:07.938 [2024-11-17T13:24:57.162Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:07.938 13:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:07.938 13:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81455' 00:15:07.938 13:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 81455 00:15:07.938 [2024-11-17 13:24:57.043806] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:07.938 13:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 81455 00:15:08.509 [2024-11-17 13:24:57.428483] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:09.447 ************************************ 00:15:09.447 END TEST raid5f_rebuild_test 00:15:09.447 ************************************ 00:15:09.447 13:24:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:15:09.447 00:15:09.447 real 0m15.229s 00:15:09.447 user 0m18.669s 00:15:09.447 sys 0m2.033s 00:15:09.447 13:24:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:09.447 13:24:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.447 13:24:58 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:15:09.447 13:24:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:09.447 13:24:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:09.447 13:24:58 bdev_raid 
-- common/autotest_common.sh@10 -- # set +x 00:15:09.447 ************************************ 00:15:09.447 START TEST raid5f_rebuild_test_sb 00:15:09.447 ************************************ 00:15:09.447 13:24:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:15:09.447 13:24:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:09.447 13:24:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:15:09.447 13:24:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:09.447 13:24:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:09.447 13:24:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:09.447 13:24:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:09.447 13:24:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:09.447 13:24:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:09.447 13:24:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:09.447 13:24:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:09.447 13:24:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:09.447 13:24:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:09.447 13:24:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:09.447 13:24:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:09.447 13:24:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:09.447 13:24:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 
00:15:09.447 13:24:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:09.447 13:24:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:09.448 13:24:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:09.448 13:24:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:09.448 13:24:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:09.448 13:24:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:09.448 13:24:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:09.448 13:24:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:09.448 13:24:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:09.448 13:24:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:09.448 13:24:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:09.448 13:24:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:09.448 13:24:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:09.448 13:24:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=81885 00:15:09.448 13:24:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:09.448 13:24:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 81885 00:15:09.448 13:24:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 81885 ']' 00:15:09.448 13:24:58 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:09.448 13:24:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:09.448 13:24:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:09.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:09.448 13:24:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:09.448 13:24:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.448 [2024-11-17 13:24:58.661750] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:15:09.448 [2024-11-17 13:24:58.661918] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:15:09.448 Zero copy mechanism will not be used. 
00:15:09.448 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81885 ] 00:15:09.707 [2024-11-17 13:24:58.839727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.966 [2024-11-17 13:24:58.973609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.226 [2024-11-17 13:24:59.207056] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:10.226 [2024-11-17 13:24:59.207180] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:10.485 13:24:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:10.485 13:24:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:10.485 13:24:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:10.485 13:24:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:10.485 13:24:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.485 13:24:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.485 BaseBdev1_malloc 00:15:10.485 13:24:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.485 13:24:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:10.485 13:24:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.485 13:24:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.485 [2024-11-17 13:24:59.593927] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:10.485 [2024-11-17 13:24:59.594042] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:15:10.485 [2024-11-17 13:24:59.594072] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:10.485 [2024-11-17 13:24:59.594085] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.485 [2024-11-17 13:24:59.596453] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.485 [2024-11-17 13:24:59.596492] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:10.486 BaseBdev1 00:15:10.486 13:24:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.486 13:24:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:10.486 13:24:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:10.486 13:24:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.486 13:24:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.486 BaseBdev2_malloc 00:15:10.486 13:24:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.486 13:24:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:10.486 13:24:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.486 13:24:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.486 [2024-11-17 13:24:59.655338] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:10.486 [2024-11-17 13:24:59.655413] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.486 [2024-11-17 13:24:59.655450] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:10.486 
[2024-11-17 13:24:59.655462] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.486 [2024-11-17 13:24:59.657809] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.486 [2024-11-17 13:24:59.657857] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:10.486 BaseBdev2 00:15:10.486 13:24:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.486 13:24:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:10.486 13:24:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:10.486 13:24:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.486 13:24:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.788 BaseBdev3_malloc 00:15:10.788 13:24:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.788 13:24:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:10.788 13:24:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.788 13:24:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.788 [2024-11-17 13:24:59.750740] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:10.788 [2024-11-17 13:24:59.750801] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.788 [2024-11-17 13:24:59.750843] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:10.788 [2024-11-17 13:24:59.750855] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.788 [2024-11-17 13:24:59.753185] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.788 [2024-11-17 13:24:59.753231] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:10.788 BaseBdev3 00:15:10.788 13:24:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.788 13:24:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:10.788 13:24:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.788 13:24:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.788 spare_malloc 00:15:10.788 13:24:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.788 13:24:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:10.788 13:24:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.788 13:24:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.788 spare_delay 00:15:10.788 13:24:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.788 13:24:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:10.788 13:24:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.788 13:24:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.788 [2024-11-17 13:24:59.824300] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:10.788 [2024-11-17 13:24:59.824459] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.788 [2024-11-17 13:24:59.824481] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:15:10.788 [2024-11-17 13:24:59.824493] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.788 [2024-11-17 13:24:59.826910] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.788 [2024-11-17 13:24:59.826957] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:10.788 spare 00:15:10.788 13:24:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.788 13:24:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:15:10.788 13:24:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.788 13:24:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.788 [2024-11-17 13:24:59.836356] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:10.788 [2024-11-17 13:24:59.838416] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:10.788 [2024-11-17 13:24:59.838534] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:10.788 [2024-11-17 13:24:59.838733] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:10.788 [2024-11-17 13:24:59.838750] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:10.788 [2024-11-17 13:24:59.838999] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:10.788 [2024-11-17 13:24:59.844742] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:10.788 [2024-11-17 13:24:59.844766] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:10.788 [2024-11-17 13:24:59.844940] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:10.788 13:24:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.788 13:24:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:10.788 13:24:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:10.788 13:24:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:10.788 13:24:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:10.788 13:24:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:10.788 13:24:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:10.788 13:24:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.788 13:24:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.788 13:24:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.788 13:24:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.788 13:24:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.788 13:24:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.788 13:24:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.788 13:24:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.788 13:24:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.788 13:24:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.788 "name": "raid_bdev1", 00:15:10.788 
"uuid": "18887443-29b6-4124-891f-31e03cbb7edd", 00:15:10.788 "strip_size_kb": 64, 00:15:10.788 "state": "online", 00:15:10.788 "raid_level": "raid5f", 00:15:10.788 "superblock": true, 00:15:10.788 "num_base_bdevs": 3, 00:15:10.788 "num_base_bdevs_discovered": 3, 00:15:10.788 "num_base_bdevs_operational": 3, 00:15:10.788 "base_bdevs_list": [ 00:15:10.788 { 00:15:10.788 "name": "BaseBdev1", 00:15:10.788 "uuid": "52602af2-3a25-5509-8f37-572b6e1a111c", 00:15:10.788 "is_configured": true, 00:15:10.788 "data_offset": 2048, 00:15:10.788 "data_size": 63488 00:15:10.788 }, 00:15:10.788 { 00:15:10.788 "name": "BaseBdev2", 00:15:10.788 "uuid": "14faa14d-5105-5f59-a622-012b8353326c", 00:15:10.788 "is_configured": true, 00:15:10.788 "data_offset": 2048, 00:15:10.788 "data_size": 63488 00:15:10.788 }, 00:15:10.788 { 00:15:10.788 "name": "BaseBdev3", 00:15:10.788 "uuid": "b43f4a3d-51a1-5768-9c82-2a050b26d8e3", 00:15:10.788 "is_configured": true, 00:15:10.788 "data_offset": 2048, 00:15:10.788 "data_size": 63488 00:15:10.788 } 00:15:10.788 ] 00:15:10.788 }' 00:15:10.788 13:24:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.788 13:24:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.358 13:25:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:11.358 13:25:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.358 13:25:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.358 13:25:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:11.358 [2024-11-17 13:25:00.283513] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:11.358 13:25:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.358 13:25:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 
-- # raid_bdev_size=126976 00:15:11.359 13:25:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:11.359 13:25:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.359 13:25:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.359 13:25:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.359 13:25:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.359 13:25:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:11.359 13:25:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:11.359 13:25:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:11.359 13:25:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:11.359 13:25:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:11.359 13:25:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:11.359 13:25:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:11.359 13:25:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:11.359 13:25:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:11.359 13:25:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:11.359 13:25:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:11.359 13:25:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:11.359 13:25:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:11.359 13:25:00 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:11.359 [2024-11-17 13:25:00.550847] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:11.359 /dev/nbd0 00:15:11.619 13:25:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:11.619 13:25:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:11.619 13:25:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:11.619 13:25:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:11.619 13:25:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:11.619 13:25:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:11.619 13:25:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:11.619 13:25:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:11.619 13:25:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:11.619 13:25:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:11.619 13:25:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:11.619 1+0 records in 00:15:11.619 1+0 records out 00:15:11.619 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000444991 s, 9.2 MB/s 00:15:11.619 13:25:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:11.619 13:25:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:11.619 13:25:00 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:11.619 13:25:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:11.619 13:25:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:11.619 13:25:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:11.619 13:25:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:11.619 13:25:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:11.619 13:25:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:15:11.619 13:25:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:15:11.619 13:25:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:15:11.878 496+0 records in 00:15:11.878 496+0 records out 00:15:11.878 65011712 bytes (65 MB, 62 MiB) copied, 0.389319 s, 167 MB/s 00:15:11.878 13:25:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:11.878 13:25:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:11.879 13:25:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:11.879 13:25:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:11.879 13:25:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:11.879 13:25:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:11.879 13:25:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:12.139 13:25:01 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:12.139 [2024-11-17 13:25:01.238094] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:12.139 13:25:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:12.139 13:25:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:12.139 13:25:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:12.139 13:25:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:12.139 13:25:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:12.139 13:25:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:12.139 13:25:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:12.139 13:25:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:12.139 13:25:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.139 13:25:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.139 [2024-11-17 13:25:01.255499] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:12.139 13:25:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.139 13:25:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:12.139 13:25:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:12.139 13:25:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:12.139 13:25:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:12.139 13:25:01 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:12.139 13:25:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:12.139 13:25:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.139 13:25:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.139 13:25:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.139 13:25:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.139 13:25:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.139 13:25:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.139 13:25:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.139 13:25:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.139 13:25:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.139 13:25:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.139 "name": "raid_bdev1", 00:15:12.139 "uuid": "18887443-29b6-4124-891f-31e03cbb7edd", 00:15:12.139 "strip_size_kb": 64, 00:15:12.139 "state": "online", 00:15:12.139 "raid_level": "raid5f", 00:15:12.139 "superblock": true, 00:15:12.139 "num_base_bdevs": 3, 00:15:12.139 "num_base_bdevs_discovered": 2, 00:15:12.139 "num_base_bdevs_operational": 2, 00:15:12.139 "base_bdevs_list": [ 00:15:12.139 { 00:15:12.139 "name": null, 00:15:12.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.139 "is_configured": false, 00:15:12.139 "data_offset": 0, 00:15:12.139 "data_size": 63488 00:15:12.139 }, 00:15:12.139 { 00:15:12.139 "name": "BaseBdev2", 00:15:12.139 "uuid": "14faa14d-5105-5f59-a622-012b8353326c", 00:15:12.139 
"is_configured": true, 00:15:12.139 "data_offset": 2048, 00:15:12.139 "data_size": 63488 00:15:12.139 }, 00:15:12.139 { 00:15:12.139 "name": "BaseBdev3", 00:15:12.139 "uuid": "b43f4a3d-51a1-5768-9c82-2a050b26d8e3", 00:15:12.139 "is_configured": true, 00:15:12.139 "data_offset": 2048, 00:15:12.139 "data_size": 63488 00:15:12.139 } 00:15:12.139 ] 00:15:12.139 }' 00:15:12.139 13:25:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.139 13:25:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.706 13:25:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:12.706 13:25:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.706 13:25:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.706 [2024-11-17 13:25:01.666798] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:12.706 [2024-11-17 13:25:01.683856] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:15:12.706 13:25:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.706 13:25:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:12.706 [2024-11-17 13:25:01.691901] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:13.643 13:25:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:13.643 13:25:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:13.643 13:25:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:13.643 13:25:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:13.643 13:25:02 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:13.643 13:25:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.643 13:25:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.643 13:25:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.643 13:25:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.643 13:25:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.643 13:25:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:13.643 "name": "raid_bdev1", 00:15:13.643 "uuid": "18887443-29b6-4124-891f-31e03cbb7edd", 00:15:13.643 "strip_size_kb": 64, 00:15:13.643 "state": "online", 00:15:13.643 "raid_level": "raid5f", 00:15:13.644 "superblock": true, 00:15:13.644 "num_base_bdevs": 3, 00:15:13.644 "num_base_bdevs_discovered": 3, 00:15:13.644 "num_base_bdevs_operational": 3, 00:15:13.644 "process": { 00:15:13.644 "type": "rebuild", 00:15:13.644 "target": "spare", 00:15:13.644 "progress": { 00:15:13.644 "blocks": 18432, 00:15:13.644 "percent": 14 00:15:13.644 } 00:15:13.644 }, 00:15:13.644 "base_bdevs_list": [ 00:15:13.644 { 00:15:13.644 "name": "spare", 00:15:13.644 "uuid": "c34dcaa6-95c3-5170-9ce3-30bf0a43d325", 00:15:13.644 "is_configured": true, 00:15:13.644 "data_offset": 2048, 00:15:13.644 "data_size": 63488 00:15:13.644 }, 00:15:13.644 { 00:15:13.644 "name": "BaseBdev2", 00:15:13.644 "uuid": "14faa14d-5105-5f59-a622-012b8353326c", 00:15:13.644 "is_configured": true, 00:15:13.644 "data_offset": 2048, 00:15:13.644 "data_size": 63488 00:15:13.644 }, 00:15:13.644 { 00:15:13.644 "name": "BaseBdev3", 00:15:13.644 "uuid": "b43f4a3d-51a1-5768-9c82-2a050b26d8e3", 00:15:13.644 "is_configured": true, 00:15:13.644 "data_offset": 2048, 00:15:13.644 "data_size": 
63488 00:15:13.644 } 00:15:13.644 ] 00:15:13.644 }' 00:15:13.644 13:25:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:13.644 13:25:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:13.644 13:25:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:13.644 13:25:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:13.644 13:25:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:13.644 13:25:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.644 13:25:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.644 [2024-11-17 13:25:02.819123] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:13.903 [2024-11-17 13:25:02.902192] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:13.903 [2024-11-17 13:25:02.902337] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:13.903 [2024-11-17 13:25:02.902382] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:13.903 [2024-11-17 13:25:02.902407] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:13.903 13:25:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.903 13:25:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:13.903 13:25:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:13.903 13:25:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:13.903 13:25:02 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:13.903 13:25:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:13.903 13:25:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:13.903 13:25:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.903 13:25:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.903 13:25:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.903 13:25:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.903 13:25:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.903 13:25:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.904 13:25:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.904 13:25:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.904 13:25:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.904 13:25:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.904 "name": "raid_bdev1", 00:15:13.904 "uuid": "18887443-29b6-4124-891f-31e03cbb7edd", 00:15:13.904 "strip_size_kb": 64, 00:15:13.904 "state": "online", 00:15:13.904 "raid_level": "raid5f", 00:15:13.904 "superblock": true, 00:15:13.904 "num_base_bdevs": 3, 00:15:13.904 "num_base_bdevs_discovered": 2, 00:15:13.904 "num_base_bdevs_operational": 2, 00:15:13.904 "base_bdevs_list": [ 00:15:13.904 { 00:15:13.904 "name": null, 00:15:13.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.904 "is_configured": false, 00:15:13.904 "data_offset": 0, 00:15:13.904 "data_size": 63488 
00:15:13.904 }, 00:15:13.904 { 00:15:13.904 "name": "BaseBdev2", 00:15:13.904 "uuid": "14faa14d-5105-5f59-a622-012b8353326c", 00:15:13.904 "is_configured": true, 00:15:13.904 "data_offset": 2048, 00:15:13.904 "data_size": 63488 00:15:13.904 }, 00:15:13.904 { 00:15:13.904 "name": "BaseBdev3", 00:15:13.904 "uuid": "b43f4a3d-51a1-5768-9c82-2a050b26d8e3", 00:15:13.904 "is_configured": true, 00:15:13.904 "data_offset": 2048, 00:15:13.904 "data_size": 63488 00:15:13.904 } 00:15:13.904 ] 00:15:13.904 }' 00:15:13.904 13:25:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.904 13:25:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.163 13:25:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:14.163 13:25:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:14.163 13:25:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:14.163 13:25:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:14.163 13:25:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:14.163 13:25:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.163 13:25:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.163 13:25:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.163 13:25:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.163 13:25:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.422 13:25:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:14.422 "name": "raid_bdev1", 00:15:14.422 "uuid": 
"18887443-29b6-4124-891f-31e03cbb7edd", 00:15:14.422 "strip_size_kb": 64, 00:15:14.422 "state": "online", 00:15:14.422 "raid_level": "raid5f", 00:15:14.422 "superblock": true, 00:15:14.423 "num_base_bdevs": 3, 00:15:14.423 "num_base_bdevs_discovered": 2, 00:15:14.423 "num_base_bdevs_operational": 2, 00:15:14.423 "base_bdevs_list": [ 00:15:14.423 { 00:15:14.423 "name": null, 00:15:14.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.423 "is_configured": false, 00:15:14.423 "data_offset": 0, 00:15:14.423 "data_size": 63488 00:15:14.423 }, 00:15:14.423 { 00:15:14.423 "name": "BaseBdev2", 00:15:14.423 "uuid": "14faa14d-5105-5f59-a622-012b8353326c", 00:15:14.423 "is_configured": true, 00:15:14.423 "data_offset": 2048, 00:15:14.423 "data_size": 63488 00:15:14.423 }, 00:15:14.423 { 00:15:14.423 "name": "BaseBdev3", 00:15:14.423 "uuid": "b43f4a3d-51a1-5768-9c82-2a050b26d8e3", 00:15:14.423 "is_configured": true, 00:15:14.423 "data_offset": 2048, 00:15:14.423 "data_size": 63488 00:15:14.423 } 00:15:14.423 ] 00:15:14.423 }' 00:15:14.423 13:25:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:14.423 13:25:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:14.423 13:25:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:14.423 13:25:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:14.423 13:25:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:14.423 13:25:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.423 13:25:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.423 [2024-11-17 13:25:03.485174] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:14.423 [2024-11-17 13:25:03.501969] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:15:14.423 13:25:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.423 13:25:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:14.423 [2024-11-17 13:25:03.510069] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:15.362 13:25:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:15.362 13:25:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:15.362 13:25:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:15.362 13:25:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:15.362 13:25:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:15.362 13:25:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.362 13:25:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.362 13:25:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.362 13:25:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.362 13:25:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.362 13:25:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:15.362 "name": "raid_bdev1", 00:15:15.362 "uuid": "18887443-29b6-4124-891f-31e03cbb7edd", 00:15:15.362 "strip_size_kb": 64, 00:15:15.362 "state": "online", 00:15:15.362 "raid_level": "raid5f", 00:15:15.362 "superblock": true, 00:15:15.362 "num_base_bdevs": 3, 00:15:15.362 "num_base_bdevs_discovered": 3, 00:15:15.362 
"num_base_bdevs_operational": 3, 00:15:15.362 "process": { 00:15:15.362 "type": "rebuild", 00:15:15.362 "target": "spare", 00:15:15.362 "progress": { 00:15:15.362 "blocks": 18432, 00:15:15.362 "percent": 14 00:15:15.362 } 00:15:15.362 }, 00:15:15.362 "base_bdevs_list": [ 00:15:15.362 { 00:15:15.362 "name": "spare", 00:15:15.362 "uuid": "c34dcaa6-95c3-5170-9ce3-30bf0a43d325", 00:15:15.362 "is_configured": true, 00:15:15.362 "data_offset": 2048, 00:15:15.362 "data_size": 63488 00:15:15.362 }, 00:15:15.362 { 00:15:15.362 "name": "BaseBdev2", 00:15:15.362 "uuid": "14faa14d-5105-5f59-a622-012b8353326c", 00:15:15.362 "is_configured": true, 00:15:15.362 "data_offset": 2048, 00:15:15.362 "data_size": 63488 00:15:15.362 }, 00:15:15.362 { 00:15:15.362 "name": "BaseBdev3", 00:15:15.362 "uuid": "b43f4a3d-51a1-5768-9c82-2a050b26d8e3", 00:15:15.362 "is_configured": true, 00:15:15.362 "data_offset": 2048, 00:15:15.362 "data_size": 63488 00:15:15.362 } 00:15:15.362 ] 00:15:15.362 }' 00:15:15.362 13:25:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:15.622 13:25:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:15.622 13:25:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:15.622 13:25:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:15.622 13:25:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:15.622 13:25:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:15.622 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:15.622 13:25:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:15:15.622 13:25:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:15.622 
13:25:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=554 00:15:15.622 13:25:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:15.622 13:25:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:15.622 13:25:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:15.622 13:25:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:15.622 13:25:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:15.622 13:25:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:15.622 13:25:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.622 13:25:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.622 13:25:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.622 13:25:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.622 13:25:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.622 13:25:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:15.622 "name": "raid_bdev1", 00:15:15.622 "uuid": "18887443-29b6-4124-891f-31e03cbb7edd", 00:15:15.622 "strip_size_kb": 64, 00:15:15.622 "state": "online", 00:15:15.622 "raid_level": "raid5f", 00:15:15.622 "superblock": true, 00:15:15.622 "num_base_bdevs": 3, 00:15:15.622 "num_base_bdevs_discovered": 3, 00:15:15.622 "num_base_bdevs_operational": 3, 00:15:15.622 "process": { 00:15:15.622 "type": "rebuild", 00:15:15.622 "target": "spare", 00:15:15.622 "progress": { 00:15:15.622 "blocks": 22528, 00:15:15.622 "percent": 17 00:15:15.622 } 00:15:15.622 }, 
00:15:15.622 "base_bdevs_list": [ 00:15:15.622 { 00:15:15.622 "name": "spare", 00:15:15.622 "uuid": "c34dcaa6-95c3-5170-9ce3-30bf0a43d325", 00:15:15.622 "is_configured": true, 00:15:15.622 "data_offset": 2048, 00:15:15.622 "data_size": 63488 00:15:15.622 }, 00:15:15.622 { 00:15:15.622 "name": "BaseBdev2", 00:15:15.622 "uuid": "14faa14d-5105-5f59-a622-012b8353326c", 00:15:15.622 "is_configured": true, 00:15:15.622 "data_offset": 2048, 00:15:15.622 "data_size": 63488 00:15:15.622 }, 00:15:15.622 { 00:15:15.622 "name": "BaseBdev3", 00:15:15.622 "uuid": "b43f4a3d-51a1-5768-9c82-2a050b26d8e3", 00:15:15.622 "is_configured": true, 00:15:15.622 "data_offset": 2048, 00:15:15.622 "data_size": 63488 00:15:15.622 } 00:15:15.622 ] 00:15:15.622 }' 00:15:15.622 13:25:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:15.622 13:25:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:15.622 13:25:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:15.622 13:25:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:15.622 13:25:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:16.559 13:25:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:16.559 13:25:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:16.559 13:25:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:16.559 13:25:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:16.559 13:25:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:16.559 13:25:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:16.821 
13:25:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.821 13:25:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.821 13:25:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.821 13:25:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.821 13:25:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.821 13:25:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:16.821 "name": "raid_bdev1", 00:15:16.821 "uuid": "18887443-29b6-4124-891f-31e03cbb7edd", 00:15:16.821 "strip_size_kb": 64, 00:15:16.821 "state": "online", 00:15:16.821 "raid_level": "raid5f", 00:15:16.821 "superblock": true, 00:15:16.821 "num_base_bdevs": 3, 00:15:16.821 "num_base_bdevs_discovered": 3, 00:15:16.821 "num_base_bdevs_operational": 3, 00:15:16.821 "process": { 00:15:16.821 "type": "rebuild", 00:15:16.821 "target": "spare", 00:15:16.821 "progress": { 00:15:16.821 "blocks": 45056, 00:15:16.821 "percent": 35 00:15:16.821 } 00:15:16.821 }, 00:15:16.821 "base_bdevs_list": [ 00:15:16.821 { 00:15:16.821 "name": "spare", 00:15:16.821 "uuid": "c34dcaa6-95c3-5170-9ce3-30bf0a43d325", 00:15:16.821 "is_configured": true, 00:15:16.821 "data_offset": 2048, 00:15:16.821 "data_size": 63488 00:15:16.821 }, 00:15:16.821 { 00:15:16.821 "name": "BaseBdev2", 00:15:16.821 "uuid": "14faa14d-5105-5f59-a622-012b8353326c", 00:15:16.821 "is_configured": true, 00:15:16.821 "data_offset": 2048, 00:15:16.821 "data_size": 63488 00:15:16.821 }, 00:15:16.821 { 00:15:16.821 "name": "BaseBdev3", 00:15:16.821 "uuid": "b43f4a3d-51a1-5768-9c82-2a050b26d8e3", 00:15:16.821 "is_configured": true, 00:15:16.821 "data_offset": 2048, 00:15:16.821 "data_size": 63488 00:15:16.821 } 00:15:16.821 ] 00:15:16.821 }' 00:15:16.821 13:25:05 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:16.821 13:25:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:16.821 13:25:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:16.821 13:25:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:16.821 13:25:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:17.757 13:25:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:17.757 13:25:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:17.757 13:25:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:17.757 13:25:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:17.757 13:25:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:17.757 13:25:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:17.757 13:25:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.757 13:25:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.757 13:25:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.757 13:25:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.757 13:25:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.017 13:25:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:18.017 "name": "raid_bdev1", 00:15:18.017 "uuid": "18887443-29b6-4124-891f-31e03cbb7edd", 00:15:18.017 
"strip_size_kb": 64, 00:15:18.017 "state": "online", 00:15:18.017 "raid_level": "raid5f", 00:15:18.017 "superblock": true, 00:15:18.017 "num_base_bdevs": 3, 00:15:18.017 "num_base_bdevs_discovered": 3, 00:15:18.017 "num_base_bdevs_operational": 3, 00:15:18.017 "process": { 00:15:18.017 "type": "rebuild", 00:15:18.017 "target": "spare", 00:15:18.017 "progress": { 00:15:18.017 "blocks": 67584, 00:15:18.017 "percent": 53 00:15:18.017 } 00:15:18.017 }, 00:15:18.017 "base_bdevs_list": [ 00:15:18.017 { 00:15:18.017 "name": "spare", 00:15:18.017 "uuid": "c34dcaa6-95c3-5170-9ce3-30bf0a43d325", 00:15:18.017 "is_configured": true, 00:15:18.017 "data_offset": 2048, 00:15:18.017 "data_size": 63488 00:15:18.017 }, 00:15:18.017 { 00:15:18.017 "name": "BaseBdev2", 00:15:18.017 "uuid": "14faa14d-5105-5f59-a622-012b8353326c", 00:15:18.017 "is_configured": true, 00:15:18.017 "data_offset": 2048, 00:15:18.017 "data_size": 63488 00:15:18.017 }, 00:15:18.018 { 00:15:18.018 "name": "BaseBdev3", 00:15:18.018 "uuid": "b43f4a3d-51a1-5768-9c82-2a050b26d8e3", 00:15:18.018 "is_configured": true, 00:15:18.018 "data_offset": 2048, 00:15:18.018 "data_size": 63488 00:15:18.018 } 00:15:18.018 ] 00:15:18.018 }' 00:15:18.018 13:25:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:18.018 13:25:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:18.018 13:25:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:18.018 13:25:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:18.018 13:25:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:18.953 13:25:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:18.953 13:25:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
00:15:18.953 13:25:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:18.953 13:25:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:18.953 13:25:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:18.953 13:25:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:18.953 13:25:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.953 13:25:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.953 13:25:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.953 13:25:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.953 13:25:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.953 13:25:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:18.953 "name": "raid_bdev1", 00:15:18.953 "uuid": "18887443-29b6-4124-891f-31e03cbb7edd", 00:15:18.953 "strip_size_kb": 64, 00:15:18.953 "state": "online", 00:15:18.953 "raid_level": "raid5f", 00:15:18.953 "superblock": true, 00:15:18.953 "num_base_bdevs": 3, 00:15:18.953 "num_base_bdevs_discovered": 3, 00:15:18.953 "num_base_bdevs_operational": 3, 00:15:18.953 "process": { 00:15:18.953 "type": "rebuild", 00:15:18.953 "target": "spare", 00:15:18.953 "progress": { 00:15:18.953 "blocks": 92160, 00:15:18.953 "percent": 72 00:15:18.953 } 00:15:18.953 }, 00:15:18.953 "base_bdevs_list": [ 00:15:18.953 { 00:15:18.953 "name": "spare", 00:15:18.953 "uuid": "c34dcaa6-95c3-5170-9ce3-30bf0a43d325", 00:15:18.953 "is_configured": true, 00:15:18.953 "data_offset": 2048, 00:15:18.953 "data_size": 63488 00:15:18.954 }, 00:15:18.954 { 00:15:18.954 "name": "BaseBdev2", 00:15:18.954 "uuid": 
"14faa14d-5105-5f59-a622-012b8353326c", 00:15:18.954 "is_configured": true, 00:15:18.954 "data_offset": 2048, 00:15:18.954 "data_size": 63488 00:15:18.954 }, 00:15:18.954 { 00:15:18.954 "name": "BaseBdev3", 00:15:18.954 "uuid": "b43f4a3d-51a1-5768-9c82-2a050b26d8e3", 00:15:18.954 "is_configured": true, 00:15:18.954 "data_offset": 2048, 00:15:18.954 "data_size": 63488 00:15:18.954 } 00:15:18.954 ] 00:15:18.954 }' 00:15:18.954 13:25:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:19.213 13:25:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:19.213 13:25:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:19.213 13:25:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:19.213 13:25:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:20.161 13:25:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:20.161 13:25:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:20.161 13:25:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:20.161 13:25:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:20.161 13:25:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:20.161 13:25:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:20.161 13:25:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.161 13:25:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.162 13:25:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:20.162 13:25:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.162 13:25:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.162 13:25:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:20.162 "name": "raid_bdev1", 00:15:20.162 "uuid": "18887443-29b6-4124-891f-31e03cbb7edd", 00:15:20.162 "strip_size_kb": 64, 00:15:20.162 "state": "online", 00:15:20.162 "raid_level": "raid5f", 00:15:20.162 "superblock": true, 00:15:20.162 "num_base_bdevs": 3, 00:15:20.162 "num_base_bdevs_discovered": 3, 00:15:20.162 "num_base_bdevs_operational": 3, 00:15:20.162 "process": { 00:15:20.162 "type": "rebuild", 00:15:20.162 "target": "spare", 00:15:20.162 "progress": { 00:15:20.162 "blocks": 114688, 00:15:20.162 "percent": 90 00:15:20.162 } 00:15:20.162 }, 00:15:20.162 "base_bdevs_list": [ 00:15:20.162 { 00:15:20.162 "name": "spare", 00:15:20.162 "uuid": "c34dcaa6-95c3-5170-9ce3-30bf0a43d325", 00:15:20.162 "is_configured": true, 00:15:20.162 "data_offset": 2048, 00:15:20.162 "data_size": 63488 00:15:20.162 }, 00:15:20.162 { 00:15:20.162 "name": "BaseBdev2", 00:15:20.162 "uuid": "14faa14d-5105-5f59-a622-012b8353326c", 00:15:20.162 "is_configured": true, 00:15:20.162 "data_offset": 2048, 00:15:20.162 "data_size": 63488 00:15:20.162 }, 00:15:20.162 { 00:15:20.162 "name": "BaseBdev3", 00:15:20.163 "uuid": "b43f4a3d-51a1-5768-9c82-2a050b26d8e3", 00:15:20.163 "is_configured": true, 00:15:20.163 "data_offset": 2048, 00:15:20.163 "data_size": 63488 00:15:20.163 } 00:15:20.163 ] 00:15:20.163 }' 00:15:20.163 13:25:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:20.163 13:25:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:20.163 13:25:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:20.163 
13:25:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:20.163 13:25:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:20.737 [2024-11-17 13:25:09.763669] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:20.737 [2024-11-17 13:25:09.763883] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:20.737 [2024-11-17 13:25:09.764090] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:21.307 13:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:21.307 13:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:21.307 13:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:21.307 13:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:21.307 13:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:21.307 13:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:21.307 13:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.307 13:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.307 13:25:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.307 13:25:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.307 13:25:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.307 13:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:21.307 "name": "raid_bdev1", 00:15:21.307 "uuid": 
"18887443-29b6-4124-891f-31e03cbb7edd", 00:15:21.307 "strip_size_kb": 64, 00:15:21.307 "state": "online", 00:15:21.307 "raid_level": "raid5f", 00:15:21.307 "superblock": true, 00:15:21.307 "num_base_bdevs": 3, 00:15:21.307 "num_base_bdevs_discovered": 3, 00:15:21.307 "num_base_bdevs_operational": 3, 00:15:21.307 "base_bdevs_list": [ 00:15:21.307 { 00:15:21.307 "name": "spare", 00:15:21.307 "uuid": "c34dcaa6-95c3-5170-9ce3-30bf0a43d325", 00:15:21.307 "is_configured": true, 00:15:21.307 "data_offset": 2048, 00:15:21.307 "data_size": 63488 00:15:21.307 }, 00:15:21.307 { 00:15:21.307 "name": "BaseBdev2", 00:15:21.307 "uuid": "14faa14d-5105-5f59-a622-012b8353326c", 00:15:21.307 "is_configured": true, 00:15:21.307 "data_offset": 2048, 00:15:21.307 "data_size": 63488 00:15:21.307 }, 00:15:21.307 { 00:15:21.307 "name": "BaseBdev3", 00:15:21.307 "uuid": "b43f4a3d-51a1-5768-9c82-2a050b26d8e3", 00:15:21.307 "is_configured": true, 00:15:21.307 "data_offset": 2048, 00:15:21.307 "data_size": 63488 00:15:21.307 } 00:15:21.307 ] 00:15:21.307 }' 00:15:21.307 13:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:21.307 13:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:21.307 13:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:21.567 13:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:21.567 13:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:21.567 13:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:21.567 13:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:21.567 13:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:21.567 13:25:10 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:21.567 13:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:21.567 13:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.567 13:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.567 13:25:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.567 13:25:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.567 13:25:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.567 13:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:21.567 "name": "raid_bdev1", 00:15:21.567 "uuid": "18887443-29b6-4124-891f-31e03cbb7edd", 00:15:21.567 "strip_size_kb": 64, 00:15:21.567 "state": "online", 00:15:21.567 "raid_level": "raid5f", 00:15:21.567 "superblock": true, 00:15:21.567 "num_base_bdevs": 3, 00:15:21.567 "num_base_bdevs_discovered": 3, 00:15:21.567 "num_base_bdevs_operational": 3, 00:15:21.567 "base_bdevs_list": [ 00:15:21.567 { 00:15:21.567 "name": "spare", 00:15:21.567 "uuid": "c34dcaa6-95c3-5170-9ce3-30bf0a43d325", 00:15:21.567 "is_configured": true, 00:15:21.567 "data_offset": 2048, 00:15:21.567 "data_size": 63488 00:15:21.567 }, 00:15:21.567 { 00:15:21.567 "name": "BaseBdev2", 00:15:21.567 "uuid": "14faa14d-5105-5f59-a622-012b8353326c", 00:15:21.567 "is_configured": true, 00:15:21.567 "data_offset": 2048, 00:15:21.567 "data_size": 63488 00:15:21.567 }, 00:15:21.567 { 00:15:21.567 "name": "BaseBdev3", 00:15:21.567 "uuid": "b43f4a3d-51a1-5768-9c82-2a050b26d8e3", 00:15:21.567 "is_configured": true, 00:15:21.567 "data_offset": 2048, 00:15:21.567 "data_size": 63488 00:15:21.567 } 00:15:21.567 ] 00:15:21.567 }' 00:15:21.567 13:25:10 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:21.567 13:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:21.567 13:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:21.567 13:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:21.567 13:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:21.567 13:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:21.567 13:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:21.567 13:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:21.567 13:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:21.567 13:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:21.567 13:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.567 13:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.567 13:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.567 13:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.567 13:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.567 13:25:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.567 13:25:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.567 13:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:15:21.567 13:25:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.567 13:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.567 "name": "raid_bdev1", 00:15:21.567 "uuid": "18887443-29b6-4124-891f-31e03cbb7edd", 00:15:21.567 "strip_size_kb": 64, 00:15:21.567 "state": "online", 00:15:21.567 "raid_level": "raid5f", 00:15:21.568 "superblock": true, 00:15:21.568 "num_base_bdevs": 3, 00:15:21.568 "num_base_bdevs_discovered": 3, 00:15:21.568 "num_base_bdevs_operational": 3, 00:15:21.568 "base_bdevs_list": [ 00:15:21.568 { 00:15:21.568 "name": "spare", 00:15:21.568 "uuid": "c34dcaa6-95c3-5170-9ce3-30bf0a43d325", 00:15:21.568 "is_configured": true, 00:15:21.568 "data_offset": 2048, 00:15:21.568 "data_size": 63488 00:15:21.568 }, 00:15:21.568 { 00:15:21.568 "name": "BaseBdev2", 00:15:21.568 "uuid": "14faa14d-5105-5f59-a622-012b8353326c", 00:15:21.568 "is_configured": true, 00:15:21.568 "data_offset": 2048, 00:15:21.568 "data_size": 63488 00:15:21.568 }, 00:15:21.568 { 00:15:21.568 "name": "BaseBdev3", 00:15:21.568 "uuid": "b43f4a3d-51a1-5768-9c82-2a050b26d8e3", 00:15:21.568 "is_configured": true, 00:15:21.568 "data_offset": 2048, 00:15:21.568 "data_size": 63488 00:15:21.568 } 00:15:21.568 ] 00:15:21.568 }' 00:15:21.568 13:25:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.568 13:25:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.137 13:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:22.137 13:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.137 13:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.137 [2024-11-17 13:25:11.129598] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:22.137 [2024-11-17 
13:25:11.129632] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:22.137 [2024-11-17 13:25:11.129745] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:22.137 [2024-11-17 13:25:11.129837] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:22.137 [2024-11-17 13:25:11.129855] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:22.137 13:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.137 13:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.137 13:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:15:22.137 13:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.137 13:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.137 13:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.137 13:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:22.137 13:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:22.137 13:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:22.137 13:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:22.137 13:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:22.137 13:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:22.137 13:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:22.137 13:25:11 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:22.137 13:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:22.137 13:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:22.137 13:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:22.137 13:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:22.137 13:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:22.396 /dev/nbd0 00:15:22.396 13:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:22.396 13:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:22.396 13:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:22.396 13:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:22.396 13:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:22.396 13:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:22.396 13:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:22.396 13:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:22.396 13:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:22.396 13:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:22.396 13:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:22.396 1+0 records in 00:15:22.396 1+0 
records out 00:15:22.396 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000512009 s, 8.0 MB/s 00:15:22.396 13:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:22.396 13:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:22.396 13:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:22.396 13:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:22.396 13:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:22.396 13:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:22.396 13:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:22.396 13:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:22.655 /dev/nbd1 00:15:22.655 13:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:22.655 13:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:22.655 13:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:22.655 13:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:22.655 13:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:22.655 13:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:22.655 13:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:22.655 13:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:22.655 13:25:11 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:22.655 13:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:22.655 13:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:22.655 1+0 records in 00:15:22.655 1+0 records out 00:15:22.655 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000404249 s, 10.1 MB/s 00:15:22.655 13:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:22.655 13:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:22.655 13:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:22.655 13:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:22.655 13:25:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:22.655 13:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:22.655 13:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:22.655 13:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:22.655 13:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:22.655 13:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:22.655 13:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:22.655 13:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:22.655 13:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 
-- # local i 00:15:22.655 13:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:22.655 13:25:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:22.969 13:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:22.969 13:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:22.969 13:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:22.969 13:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:22.969 13:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:22.969 13:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:22.969 13:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:22.969 13:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:22.969 13:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:22.970 13:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:23.253 13:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:23.253 13:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:23.253 13:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:23.253 13:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:23.253 13:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:23.253 13:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd1 /proc/partitions 00:15:23.253 13:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:23.253 13:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:23.253 13:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:23.253 13:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:23.253 13:25:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.253 13:25:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.253 13:25:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.253 13:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:23.253 13:25:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.253 13:25:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.253 [2024-11-17 13:25:12.328003] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:23.253 [2024-11-17 13:25:12.328092] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:23.253 [2024-11-17 13:25:12.328120] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:23.253 [2024-11-17 13:25:12.328132] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:23.253 [2024-11-17 13:25:12.330744] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:23.253 spare 00:15:23.253 [2024-11-17 13:25:12.330883] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:23.253 [2024-11-17 13:25:12.331014] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:23.253 [2024-11-17 
13:25:12.331098] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:23.253 [2024-11-17 13:25:12.331281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:23.253 [2024-11-17 13:25:12.331395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:23.253 13:25:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.253 13:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:23.253 13:25:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.253 13:25:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.253 [2024-11-17 13:25:12.431309] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:23.253 [2024-11-17 13:25:12.431368] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:23.253 [2024-11-17 13:25:12.431713] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:15:23.253 [2024-11-17 13:25:12.438724] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:23.253 [2024-11-17 13:25:12.438752] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:23.253 [2024-11-17 13:25:12.439022] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:23.253 13:25:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.253 13:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:23.253 13:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:23.253 13:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # 
local expected_state=online 00:15:23.253 13:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:23.253 13:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:23.253 13:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:23.253 13:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.253 13:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.253 13:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.253 13:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.253 13:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.253 13:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.253 13:25:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.253 13:25:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.253 13:25:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.512 13:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.512 "name": "raid_bdev1", 00:15:23.512 "uuid": "18887443-29b6-4124-891f-31e03cbb7edd", 00:15:23.512 "strip_size_kb": 64, 00:15:23.512 "state": "online", 00:15:23.512 "raid_level": "raid5f", 00:15:23.512 "superblock": true, 00:15:23.512 "num_base_bdevs": 3, 00:15:23.512 "num_base_bdevs_discovered": 3, 00:15:23.512 "num_base_bdevs_operational": 3, 00:15:23.512 "base_bdevs_list": [ 00:15:23.512 { 00:15:23.512 "name": "spare", 00:15:23.512 "uuid": "c34dcaa6-95c3-5170-9ce3-30bf0a43d325", 00:15:23.512 "is_configured": true, 00:15:23.512 
"data_offset": 2048, 00:15:23.512 "data_size": 63488 00:15:23.512 }, 00:15:23.512 { 00:15:23.512 "name": "BaseBdev2", 00:15:23.512 "uuid": "14faa14d-5105-5f59-a622-012b8353326c", 00:15:23.512 "is_configured": true, 00:15:23.512 "data_offset": 2048, 00:15:23.512 "data_size": 63488 00:15:23.512 }, 00:15:23.512 { 00:15:23.512 "name": "BaseBdev3", 00:15:23.512 "uuid": "b43f4a3d-51a1-5768-9c82-2a050b26d8e3", 00:15:23.512 "is_configured": true, 00:15:23.512 "data_offset": 2048, 00:15:23.512 "data_size": 63488 00:15:23.512 } 00:15:23.512 ] 00:15:23.512 }' 00:15:23.512 13:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.512 13:25:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.770 13:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:23.771 13:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:23.771 13:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:23.771 13:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:23.771 13:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:23.771 13:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.771 13:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.771 13:25:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.771 13:25:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.771 13:25:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.771 13:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:23.771 
"name": "raid_bdev1", 00:15:23.771 "uuid": "18887443-29b6-4124-891f-31e03cbb7edd", 00:15:23.771 "strip_size_kb": 64, 00:15:23.771 "state": "online", 00:15:23.771 "raid_level": "raid5f", 00:15:23.771 "superblock": true, 00:15:23.771 "num_base_bdevs": 3, 00:15:23.771 "num_base_bdevs_discovered": 3, 00:15:23.771 "num_base_bdevs_operational": 3, 00:15:23.771 "base_bdevs_list": [ 00:15:23.771 { 00:15:23.771 "name": "spare", 00:15:23.771 "uuid": "c34dcaa6-95c3-5170-9ce3-30bf0a43d325", 00:15:23.771 "is_configured": true, 00:15:23.771 "data_offset": 2048, 00:15:23.771 "data_size": 63488 00:15:23.771 }, 00:15:23.771 { 00:15:23.771 "name": "BaseBdev2", 00:15:23.771 "uuid": "14faa14d-5105-5f59-a622-012b8353326c", 00:15:23.771 "is_configured": true, 00:15:23.771 "data_offset": 2048, 00:15:23.771 "data_size": 63488 00:15:23.771 }, 00:15:23.771 { 00:15:23.771 "name": "BaseBdev3", 00:15:23.771 "uuid": "b43f4a3d-51a1-5768-9c82-2a050b26d8e3", 00:15:23.771 "is_configured": true, 00:15:23.771 "data_offset": 2048, 00:15:23.771 "data_size": 63488 00:15:23.771 } 00:15:23.771 ] 00:15:23.771 }' 00:15:23.771 13:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:23.771 13:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:23.771 13:25:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:24.029 13:25:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:24.029 13:25:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:24.029 13:25:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.029 13:25:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.029 13:25:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.029 
13:25:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.029 13:25:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:24.029 13:25:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:24.029 13:25:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.029 13:25:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.029 [2024-11-17 13:25:13.045917] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:24.029 13:25:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.029 13:25:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:24.029 13:25:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:24.029 13:25:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:24.029 13:25:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:24.029 13:25:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:24.029 13:25:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:24.029 13:25:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.029 13:25:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.029 13:25:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.029 13:25:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.030 13:25:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:15:24.030 13:25:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.030 13:25:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.030 13:25:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.030 13:25:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.030 13:25:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.030 "name": "raid_bdev1", 00:15:24.030 "uuid": "18887443-29b6-4124-891f-31e03cbb7edd", 00:15:24.030 "strip_size_kb": 64, 00:15:24.030 "state": "online", 00:15:24.030 "raid_level": "raid5f", 00:15:24.030 "superblock": true, 00:15:24.030 "num_base_bdevs": 3, 00:15:24.030 "num_base_bdevs_discovered": 2, 00:15:24.030 "num_base_bdevs_operational": 2, 00:15:24.030 "base_bdevs_list": [ 00:15:24.030 { 00:15:24.030 "name": null, 00:15:24.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.030 "is_configured": false, 00:15:24.030 "data_offset": 0, 00:15:24.030 "data_size": 63488 00:15:24.030 }, 00:15:24.030 { 00:15:24.030 "name": "BaseBdev2", 00:15:24.030 "uuid": "14faa14d-5105-5f59-a622-012b8353326c", 00:15:24.030 "is_configured": true, 00:15:24.030 "data_offset": 2048, 00:15:24.030 "data_size": 63488 00:15:24.030 }, 00:15:24.030 { 00:15:24.030 "name": "BaseBdev3", 00:15:24.030 "uuid": "b43f4a3d-51a1-5768-9c82-2a050b26d8e3", 00:15:24.030 "is_configured": true, 00:15:24.030 "data_offset": 2048, 00:15:24.030 "data_size": 63488 00:15:24.030 } 00:15:24.030 ] 00:15:24.030 }' 00:15:24.030 13:25:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.030 13:25:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.289 13:25:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:24.289 13:25:13 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.289 13:25:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.289 [2024-11-17 13:25:13.513194] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:24.289 [2024-11-17 13:25:13.513432] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:24.289 [2024-11-17 13:25:13.513451] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:24.289 [2024-11-17 13:25:13.513494] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:24.548 [2024-11-17 13:25:13.531855] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:15:24.548 13:25:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.548 13:25:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:24.548 [2024-11-17 13:25:13.540862] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:25.485 13:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:25.485 13:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:25.485 13:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:25.485 13:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:25.485 13:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:25.485 13:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.485 13:25:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:25.485 13:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.485 13:25:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.485 13:25:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.485 13:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:25.485 "name": "raid_bdev1", 00:15:25.485 "uuid": "18887443-29b6-4124-891f-31e03cbb7edd", 00:15:25.485 "strip_size_kb": 64, 00:15:25.485 "state": "online", 00:15:25.485 "raid_level": "raid5f", 00:15:25.485 "superblock": true, 00:15:25.485 "num_base_bdevs": 3, 00:15:25.485 "num_base_bdevs_discovered": 3, 00:15:25.485 "num_base_bdevs_operational": 3, 00:15:25.485 "process": { 00:15:25.485 "type": "rebuild", 00:15:25.485 "target": "spare", 00:15:25.485 "progress": { 00:15:25.485 "blocks": 20480, 00:15:25.485 "percent": 16 00:15:25.485 } 00:15:25.485 }, 00:15:25.485 "base_bdevs_list": [ 00:15:25.485 { 00:15:25.485 "name": "spare", 00:15:25.485 "uuid": "c34dcaa6-95c3-5170-9ce3-30bf0a43d325", 00:15:25.485 "is_configured": true, 00:15:25.485 "data_offset": 2048, 00:15:25.485 "data_size": 63488 00:15:25.485 }, 00:15:25.485 { 00:15:25.485 "name": "BaseBdev2", 00:15:25.485 "uuid": "14faa14d-5105-5f59-a622-012b8353326c", 00:15:25.485 "is_configured": true, 00:15:25.485 "data_offset": 2048, 00:15:25.485 "data_size": 63488 00:15:25.485 }, 00:15:25.485 { 00:15:25.485 "name": "BaseBdev3", 00:15:25.485 "uuid": "b43f4a3d-51a1-5768-9c82-2a050b26d8e3", 00:15:25.485 "is_configured": true, 00:15:25.485 "data_offset": 2048, 00:15:25.485 "data_size": 63488 00:15:25.485 } 00:15:25.485 ] 00:15:25.485 }' 00:15:25.485 13:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:25.485 13:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:15:25.485 13:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:25.485 13:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:25.485 13:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:25.485 13:25:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.485 13:25:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.485 [2024-11-17 13:25:14.692182] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:25.745 [2024-11-17 13:25:14.751996] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:25.745 [2024-11-17 13:25:14.752064] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:25.745 [2024-11-17 13:25:14.752079] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:25.745 [2024-11-17 13:25:14.752089] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:25.745 13:25:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.745 13:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:25.745 13:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:25.745 13:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:25.745 13:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:25.745 13:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:25.745 13:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:15:25.745 13:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.745 13:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.745 13:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.745 13:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.745 13:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.745 13:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.745 13:25:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.745 13:25:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.745 13:25:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.745 13:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.745 "name": "raid_bdev1", 00:15:25.745 "uuid": "18887443-29b6-4124-891f-31e03cbb7edd", 00:15:25.745 "strip_size_kb": 64, 00:15:25.745 "state": "online", 00:15:25.745 "raid_level": "raid5f", 00:15:25.745 "superblock": true, 00:15:25.745 "num_base_bdevs": 3, 00:15:25.745 "num_base_bdevs_discovered": 2, 00:15:25.745 "num_base_bdevs_operational": 2, 00:15:25.745 "base_bdevs_list": [ 00:15:25.745 { 00:15:25.745 "name": null, 00:15:25.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.745 "is_configured": false, 00:15:25.746 "data_offset": 0, 00:15:25.746 "data_size": 63488 00:15:25.746 }, 00:15:25.746 { 00:15:25.746 "name": "BaseBdev2", 00:15:25.746 "uuid": "14faa14d-5105-5f59-a622-012b8353326c", 00:15:25.746 "is_configured": true, 00:15:25.746 "data_offset": 2048, 00:15:25.746 "data_size": 63488 00:15:25.746 }, 00:15:25.746 { 00:15:25.746 "name": "BaseBdev3", 00:15:25.746 "uuid": 
"b43f4a3d-51a1-5768-9c82-2a050b26d8e3", 00:15:25.746 "is_configured": true, 00:15:25.746 "data_offset": 2048, 00:15:25.746 "data_size": 63488 00:15:25.746 } 00:15:25.746 ] 00:15:25.746 }' 00:15:25.746 13:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.746 13:25:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.004 13:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:26.004 13:25:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.004 13:25:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.004 [2024-11-17 13:25:15.226246] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:26.004 [2024-11-17 13:25:15.226323] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:26.004 [2024-11-17 13:25:15.226346] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:15:26.004 [2024-11-17 13:25:15.226362] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:26.004 [2024-11-17 13:25:15.226872] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:26.004 [2024-11-17 13:25:15.226907] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:26.004 [2024-11-17 13:25:15.226999] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:26.004 [2024-11-17 13:25:15.227014] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:26.004 [2024-11-17 13:25:15.227024] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:26.004 [2024-11-17 13:25:15.227049] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:26.263 [2024-11-17 13:25:15.242156] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:15:26.263 spare 00:15:26.263 13:25:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.263 13:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:26.263 [2024-11-17 13:25:15.250143] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:27.203 13:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:27.203 13:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:27.203 13:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:27.203 13:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:27.203 13:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:27.203 13:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.203 13:25:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.203 13:25:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.203 13:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.203 13:25:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.203 13:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:27.203 "name": "raid_bdev1", 00:15:27.203 "uuid": "18887443-29b6-4124-891f-31e03cbb7edd", 00:15:27.203 "strip_size_kb": 64, 00:15:27.203 "state": 
"online", 00:15:27.203 "raid_level": "raid5f", 00:15:27.203 "superblock": true, 00:15:27.203 "num_base_bdevs": 3, 00:15:27.203 "num_base_bdevs_discovered": 3, 00:15:27.203 "num_base_bdevs_operational": 3, 00:15:27.203 "process": { 00:15:27.203 "type": "rebuild", 00:15:27.203 "target": "spare", 00:15:27.203 "progress": { 00:15:27.203 "blocks": 18432, 00:15:27.203 "percent": 14 00:15:27.203 } 00:15:27.203 }, 00:15:27.203 "base_bdevs_list": [ 00:15:27.203 { 00:15:27.203 "name": "spare", 00:15:27.203 "uuid": "c34dcaa6-95c3-5170-9ce3-30bf0a43d325", 00:15:27.203 "is_configured": true, 00:15:27.203 "data_offset": 2048, 00:15:27.203 "data_size": 63488 00:15:27.203 }, 00:15:27.203 { 00:15:27.203 "name": "BaseBdev2", 00:15:27.203 "uuid": "14faa14d-5105-5f59-a622-012b8353326c", 00:15:27.203 "is_configured": true, 00:15:27.203 "data_offset": 2048, 00:15:27.203 "data_size": 63488 00:15:27.203 }, 00:15:27.203 { 00:15:27.203 "name": "BaseBdev3", 00:15:27.203 "uuid": "b43f4a3d-51a1-5768-9c82-2a050b26d8e3", 00:15:27.203 "is_configured": true, 00:15:27.203 "data_offset": 2048, 00:15:27.203 "data_size": 63488 00:15:27.203 } 00:15:27.203 ] 00:15:27.203 }' 00:15:27.203 13:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:27.203 13:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:27.203 13:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:27.203 13:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:27.203 13:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:27.203 13:25:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.204 13:25:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.204 [2024-11-17 13:25:16.382321] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:27.464 [2024-11-17 13:25:16.461499] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:27.464 [2024-11-17 13:25:16.461684] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:27.464 [2024-11-17 13:25:16.461709] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:27.464 [2024-11-17 13:25:16.461718] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:27.464 13:25:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.464 13:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:27.464 13:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:27.464 13:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:27.464 13:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:27.464 13:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:27.464 13:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:27.464 13:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.464 13:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.464 13:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.464 13:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.464 13:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.464 13:25:16 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.464 13:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.464 13:25:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.464 13:25:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.464 13:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.464 "name": "raid_bdev1", 00:15:27.464 "uuid": "18887443-29b6-4124-891f-31e03cbb7edd", 00:15:27.464 "strip_size_kb": 64, 00:15:27.464 "state": "online", 00:15:27.464 "raid_level": "raid5f", 00:15:27.464 "superblock": true, 00:15:27.464 "num_base_bdevs": 3, 00:15:27.464 "num_base_bdevs_discovered": 2, 00:15:27.464 "num_base_bdevs_operational": 2, 00:15:27.464 "base_bdevs_list": [ 00:15:27.464 { 00:15:27.464 "name": null, 00:15:27.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.464 "is_configured": false, 00:15:27.464 "data_offset": 0, 00:15:27.464 "data_size": 63488 00:15:27.464 }, 00:15:27.464 { 00:15:27.464 "name": "BaseBdev2", 00:15:27.464 "uuid": "14faa14d-5105-5f59-a622-012b8353326c", 00:15:27.464 "is_configured": true, 00:15:27.464 "data_offset": 2048, 00:15:27.464 "data_size": 63488 00:15:27.464 }, 00:15:27.464 { 00:15:27.464 "name": "BaseBdev3", 00:15:27.464 "uuid": "b43f4a3d-51a1-5768-9c82-2a050b26d8e3", 00:15:27.464 "is_configured": true, 00:15:27.464 "data_offset": 2048, 00:15:27.464 "data_size": 63488 00:15:27.464 } 00:15:27.464 ] 00:15:27.464 }' 00:15:27.464 13:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.464 13:25:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.724 13:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:27.724 13:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:15:27.724 13:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:27.724 13:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:27.724 13:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:27.724 13:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.724 13:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.724 13:25:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.724 13:25:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.724 13:25:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.983 13:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:27.983 "name": "raid_bdev1", 00:15:27.984 "uuid": "18887443-29b6-4124-891f-31e03cbb7edd", 00:15:27.984 "strip_size_kb": 64, 00:15:27.984 "state": "online", 00:15:27.984 "raid_level": "raid5f", 00:15:27.984 "superblock": true, 00:15:27.984 "num_base_bdevs": 3, 00:15:27.984 "num_base_bdevs_discovered": 2, 00:15:27.984 "num_base_bdevs_operational": 2, 00:15:27.984 "base_bdevs_list": [ 00:15:27.984 { 00:15:27.984 "name": null, 00:15:27.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.984 "is_configured": false, 00:15:27.984 "data_offset": 0, 00:15:27.984 "data_size": 63488 00:15:27.984 }, 00:15:27.984 { 00:15:27.984 "name": "BaseBdev2", 00:15:27.984 "uuid": "14faa14d-5105-5f59-a622-012b8353326c", 00:15:27.984 "is_configured": true, 00:15:27.984 "data_offset": 2048, 00:15:27.984 "data_size": 63488 00:15:27.984 }, 00:15:27.984 { 00:15:27.984 "name": "BaseBdev3", 00:15:27.984 "uuid": "b43f4a3d-51a1-5768-9c82-2a050b26d8e3", 00:15:27.984 "is_configured": true, 
00:15:27.984 "data_offset": 2048, 00:15:27.984 "data_size": 63488 00:15:27.984 } 00:15:27.984 ] 00:15:27.984 }' 00:15:27.984 13:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:27.984 13:25:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:27.984 13:25:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:27.984 13:25:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:27.984 13:25:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:27.984 13:25:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.984 13:25:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.984 13:25:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.984 13:25:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:27.984 13:25:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.984 13:25:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.984 [2024-11-17 13:25:17.079375] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:27.984 [2024-11-17 13:25:17.079456] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:27.984 [2024-11-17 13:25:17.079483] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:27.984 [2024-11-17 13:25:17.079493] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:27.984 [2024-11-17 13:25:17.079997] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:27.984 [2024-11-17 
13:25:17.080018] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:27.984 [2024-11-17 13:25:17.080116] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:27.984 [2024-11-17 13:25:17.080138] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:27.984 [2024-11-17 13:25:17.080161] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:27.984 [2024-11-17 13:25:17.080173] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:27.984 BaseBdev1 00:15:27.984 13:25:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.984 13:25:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:28.921 13:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:28.921 13:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:28.921 13:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:28.921 13:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:28.921 13:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:28.921 13:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:28.921 13:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.921 13:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.921 13:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.921 13:25:18 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.921 13:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.921 13:25:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.921 13:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.921 13:25:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.921 13:25:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.921 13:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.921 "name": "raid_bdev1", 00:15:28.921 "uuid": "18887443-29b6-4124-891f-31e03cbb7edd", 00:15:28.921 "strip_size_kb": 64, 00:15:28.921 "state": "online", 00:15:28.921 "raid_level": "raid5f", 00:15:28.921 "superblock": true, 00:15:28.921 "num_base_bdevs": 3, 00:15:28.921 "num_base_bdevs_discovered": 2, 00:15:28.921 "num_base_bdevs_operational": 2, 00:15:28.921 "base_bdevs_list": [ 00:15:28.921 { 00:15:28.921 "name": null, 00:15:28.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.921 "is_configured": false, 00:15:28.921 "data_offset": 0, 00:15:28.921 "data_size": 63488 00:15:28.921 }, 00:15:28.921 { 00:15:28.921 "name": "BaseBdev2", 00:15:28.921 "uuid": "14faa14d-5105-5f59-a622-012b8353326c", 00:15:28.921 "is_configured": true, 00:15:28.921 "data_offset": 2048, 00:15:28.921 "data_size": 63488 00:15:28.921 }, 00:15:28.921 { 00:15:28.921 "name": "BaseBdev3", 00:15:28.921 "uuid": "b43f4a3d-51a1-5768-9c82-2a050b26d8e3", 00:15:28.922 "is_configured": true, 00:15:28.922 "data_offset": 2048, 00:15:28.922 "data_size": 63488 00:15:28.922 } 00:15:28.922 ] 00:15:28.922 }' 00:15:28.922 13:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.922 13:25:18 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:29.491 13:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:29.491 13:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:29.491 13:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:29.491 13:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:29.491 13:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:29.491 13:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.491 13:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.491 13:25:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.491 13:25:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.491 13:25:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.491 13:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:29.491 "name": "raid_bdev1", 00:15:29.491 "uuid": "18887443-29b6-4124-891f-31e03cbb7edd", 00:15:29.491 "strip_size_kb": 64, 00:15:29.491 "state": "online", 00:15:29.491 "raid_level": "raid5f", 00:15:29.491 "superblock": true, 00:15:29.491 "num_base_bdevs": 3, 00:15:29.491 "num_base_bdevs_discovered": 2, 00:15:29.491 "num_base_bdevs_operational": 2, 00:15:29.491 "base_bdevs_list": [ 00:15:29.491 { 00:15:29.491 "name": null, 00:15:29.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.491 "is_configured": false, 00:15:29.491 "data_offset": 0, 00:15:29.491 "data_size": 63488 00:15:29.491 }, 00:15:29.491 { 00:15:29.491 "name": "BaseBdev2", 00:15:29.491 "uuid": "14faa14d-5105-5f59-a622-012b8353326c", 
00:15:29.491 "is_configured": true, 00:15:29.491 "data_offset": 2048, 00:15:29.491 "data_size": 63488 00:15:29.491 }, 00:15:29.491 { 00:15:29.491 "name": "BaseBdev3", 00:15:29.491 "uuid": "b43f4a3d-51a1-5768-9c82-2a050b26d8e3", 00:15:29.491 "is_configured": true, 00:15:29.491 "data_offset": 2048, 00:15:29.491 "data_size": 63488 00:15:29.491 } 00:15:29.491 ] 00:15:29.491 }' 00:15:29.491 13:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:29.491 13:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:29.491 13:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:29.491 13:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:29.491 13:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:29.491 13:25:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:15:29.491 13:25:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:29.491 13:25:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:29.491 13:25:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:29.491 13:25:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:29.491 13:25:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:29.491 13:25:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:29.491 13:25:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.491 13:25:18 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.491 [2024-11-17 13:25:18.661041] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:29.491 [2024-11-17 13:25:18.661323] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:29.491 [2024-11-17 13:25:18.661345] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:29.491 request: 00:15:29.491 { 00:15:29.491 "base_bdev": "BaseBdev1", 00:15:29.491 "raid_bdev": "raid_bdev1", 00:15:29.491 "method": "bdev_raid_add_base_bdev", 00:15:29.491 "req_id": 1 00:15:29.491 } 00:15:29.491 Got JSON-RPC error response 00:15:29.491 response: 00:15:29.491 { 00:15:29.491 "code": -22, 00:15:29.491 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:29.491 } 00:15:29.491 13:25:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:29.491 13:25:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:15:29.491 13:25:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:29.491 13:25:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:29.491 13:25:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:29.491 13:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:30.871 13:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:30.871 13:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:30.871 13:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:30.871 13:25:19 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:30.871 13:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:30.871 13:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:30.871 13:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.871 13:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.871 13:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.871 13:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.871 13:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.871 13:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.871 13:25:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.871 13:25:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.871 13:25:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.871 13:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.871 "name": "raid_bdev1", 00:15:30.871 "uuid": "18887443-29b6-4124-891f-31e03cbb7edd", 00:15:30.871 "strip_size_kb": 64, 00:15:30.871 "state": "online", 00:15:30.871 "raid_level": "raid5f", 00:15:30.871 "superblock": true, 00:15:30.871 "num_base_bdevs": 3, 00:15:30.871 "num_base_bdevs_discovered": 2, 00:15:30.871 "num_base_bdevs_operational": 2, 00:15:30.871 "base_bdevs_list": [ 00:15:30.871 { 00:15:30.871 "name": null, 00:15:30.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.871 "is_configured": false, 00:15:30.871 "data_offset": 0, 00:15:30.871 "data_size": 63488 00:15:30.871 }, 00:15:30.871 { 00:15:30.871 
"name": "BaseBdev2", 00:15:30.871 "uuid": "14faa14d-5105-5f59-a622-012b8353326c", 00:15:30.871 "is_configured": true, 00:15:30.871 "data_offset": 2048, 00:15:30.871 "data_size": 63488 00:15:30.871 }, 00:15:30.871 { 00:15:30.871 "name": "BaseBdev3", 00:15:30.871 "uuid": "b43f4a3d-51a1-5768-9c82-2a050b26d8e3", 00:15:30.871 "is_configured": true, 00:15:30.871 "data_offset": 2048, 00:15:30.871 "data_size": 63488 00:15:30.871 } 00:15:30.871 ] 00:15:30.871 }' 00:15:30.871 13:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.871 13:25:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.132 13:25:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:31.132 13:25:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:31.132 13:25:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:31.132 13:25:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:31.132 13:25:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:31.132 13:25:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.132 13:25:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.132 13:25:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.132 13:25:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.132 13:25:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.132 13:25:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:31.132 "name": "raid_bdev1", 00:15:31.132 "uuid": "18887443-29b6-4124-891f-31e03cbb7edd", 00:15:31.132 
"strip_size_kb": 64, 00:15:31.132 "state": "online", 00:15:31.132 "raid_level": "raid5f", 00:15:31.132 "superblock": true, 00:15:31.132 "num_base_bdevs": 3, 00:15:31.132 "num_base_bdevs_discovered": 2, 00:15:31.132 "num_base_bdevs_operational": 2, 00:15:31.132 "base_bdevs_list": [ 00:15:31.132 { 00:15:31.132 "name": null, 00:15:31.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.132 "is_configured": false, 00:15:31.132 "data_offset": 0, 00:15:31.132 "data_size": 63488 00:15:31.132 }, 00:15:31.132 { 00:15:31.132 "name": "BaseBdev2", 00:15:31.132 "uuid": "14faa14d-5105-5f59-a622-012b8353326c", 00:15:31.132 "is_configured": true, 00:15:31.132 "data_offset": 2048, 00:15:31.132 "data_size": 63488 00:15:31.132 }, 00:15:31.132 { 00:15:31.132 "name": "BaseBdev3", 00:15:31.132 "uuid": "b43f4a3d-51a1-5768-9c82-2a050b26d8e3", 00:15:31.132 "is_configured": true, 00:15:31.132 "data_offset": 2048, 00:15:31.132 "data_size": 63488 00:15:31.132 } 00:15:31.132 ] 00:15:31.132 }' 00:15:31.132 13:25:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:31.132 13:25:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:31.132 13:25:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:31.132 13:25:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:31.132 13:25:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 81885 00:15:31.132 13:25:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 81885 ']' 00:15:31.132 13:25:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 81885 00:15:31.132 13:25:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:31.132 13:25:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:31.132 13:25:20 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81885 00:15:31.132 13:25:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:31.132 killing process with pid 81885 00:15:31.132 Received shutdown signal, test time was about 60.000000 seconds 00:15:31.132 00:15:31.132 Latency(us) 00:15:31.132 [2024-11-17T13:25:20.356Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:31.132 [2024-11-17T13:25:20.356Z] =================================================================================================================== 00:15:31.132 [2024-11-17T13:25:20.356Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:31.132 13:25:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:31.132 13:25:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81885' 00:15:31.132 13:25:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 81885 00:15:31.132 [2024-11-17 13:25:20.279894] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:31.132 13:25:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 81885 00:15:31.132 [2024-11-17 13:25:20.280043] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:31.132 [2024-11-17 13:25:20.280116] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:31.132 [2024-11-17 13:25:20.280131] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:31.701 [2024-11-17 13:25:20.678605] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:32.660 ************************************ 00:15:32.660 END TEST raid5f_rebuild_test_sb 00:15:32.660 ************************************ 00:15:32.660 13:25:21 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:15:32.660 00:15:32.660 real 0m23.255s 00:15:32.660 user 0m29.404s 00:15:32.660 sys 0m2.911s 00:15:32.660 13:25:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:32.660 13:25:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.660 13:25:21 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:15:32.660 13:25:21 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:15:32.660 13:25:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:32.660 13:25:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:32.660 13:25:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:32.920 ************************************ 00:15:32.920 START TEST raid5f_state_function_test 00:15:32.920 ************************************ 00:15:32.920 13:25:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:15:32.920 13:25:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:32.920 13:25:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:32.920 13:25:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:15:32.920 13:25:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:32.920 13:25:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:32.920 13:25:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:32.920 13:25:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:32.920 13:25:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:15:32.920 13:25:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:32.920 13:25:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:32.920 13:25:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:32.920 13:25:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:32.920 13:25:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:32.920 13:25:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:32.920 13:25:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:32.920 13:25:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:32.920 13:25:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:32.920 13:25:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:32.920 13:25:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:32.920 13:25:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:32.920 13:25:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:32.920 13:25:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:32.920 13:25:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:32.920 13:25:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:32.920 13:25:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:32.920 13:25:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:15:32.920 13:25:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:32.920 13:25:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:15:32.920 13:25:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:15:32.920 13:25:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=82638 00:15:32.920 13:25:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:32.920 Process raid pid: 82638 00:15:32.920 13:25:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82638' 00:15:32.920 13:25:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 82638 00:15:32.920 13:25:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 82638 ']' 00:15:32.920 13:25:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:32.920 13:25:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:32.920 13:25:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:32.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:32.920 13:25:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:32.920 13:25:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.920 [2024-11-17 13:25:21.995529] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:15:32.920 [2024-11-17 13:25:21.995743] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:33.179 [2024-11-17 13:25:22.172864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.179 [2024-11-17 13:25:22.299854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.438 [2024-11-17 13:25:22.521341] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:33.438 [2024-11-17 13:25:22.521461] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:33.698 13:25:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:33.698 13:25:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:15:33.698 13:25:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:33.698 13:25:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.698 13:25:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.698 [2024-11-17 13:25:22.858844] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:33.698 [2024-11-17 13:25:22.858901] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:33.698 [2024-11-17 13:25:22.858911] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:33.698 [2024-11-17 13:25:22.858929] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:33.698 [2024-11-17 13:25:22.858935] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:15:33.698 [2024-11-17 13:25:22.858945] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:33.698 [2024-11-17 13:25:22.858951] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:33.698 [2024-11-17 13:25:22.858961] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:33.698 13:25:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.698 13:25:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:33.698 13:25:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:33.698 13:25:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:33.698 13:25:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:33.698 13:25:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:33.698 13:25:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:33.698 13:25:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.698 13:25:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.698 13:25:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.698 13:25:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.698 13:25:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:33.698 13:25:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.698 13:25:22 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:33.698 13:25:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:33.698 13:25:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:33.698 13:25:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:33.698 "name": "Existed_Raid",
00:15:33.698 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:33.698 "strip_size_kb": 64,
00:15:33.698 "state": "configuring",
00:15:33.698 "raid_level": "raid5f",
00:15:33.698 "superblock": false,
00:15:33.698 "num_base_bdevs": 4,
00:15:33.698 "num_base_bdevs_discovered": 0,
00:15:33.698 "num_base_bdevs_operational": 4,
00:15:33.698 "base_bdevs_list": [
00:15:33.698 {
00:15:33.698 "name": "BaseBdev1",
00:15:33.698 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:33.698 "is_configured": false,
00:15:33.698 "data_offset": 0,
00:15:33.698 "data_size": 0
00:15:33.698 },
00:15:33.698 {
00:15:33.698 "name": "BaseBdev2",
00:15:33.698 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:33.698 "is_configured": false,
00:15:33.698 "data_offset": 0,
00:15:33.698 "data_size": 0
00:15:33.698 },
00:15:33.698 {
00:15:33.698 "name": "BaseBdev3",
00:15:33.698 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:33.698 "is_configured": false,
00:15:33.698 "data_offset": 0,
00:15:33.698 "data_size": 0
00:15:33.698 },
00:15:33.698 {
00:15:33.698 "name": "BaseBdev4",
00:15:33.698 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:33.698 "is_configured": false,
00:15:33.698 "data_offset": 0,
00:15:33.698 "data_size": 0
00:15:33.698 }
00:15:33.698 ]
00:15:33.698 }'
00:15:33.698 13:25:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:33.698 13:25:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:34.268 13:25:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:15:34.268 13:25:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:34.268 13:25:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:34.268 [2024-11-17 13:25:23.298156] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:15:34.268 [2024-11-17 13:25:23.298265] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:15:34.268 13:25:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:34.268 13:25:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:15:34.268 13:25:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:34.268 13:25:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:34.268 [2024-11-17 13:25:23.310140] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:15:34.268 [2024-11-17 13:25:23.310233] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:15:34.268 [2024-11-17 13:25:23.310261] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:15:34.268 [2024-11-17 13:25:23.310285] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:15:34.268 [2024-11-17 13:25:23.310304] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:15:34.268 [2024-11-17 13:25:23.310331] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:15:34.268 [2024-11-17 13:25:23.310349] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:15:34.268 [2024-11-17 13:25:23.310393] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:15:34.268 13:25:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:34.268 13:25:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:15:34.268 13:25:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:34.268 13:25:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:34.268 [2024-11-17 13:25:23.361592] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:15:34.268 BaseBdev1
00:15:34.268 13:25:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:34.268 13:25:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:15:34.268 13:25:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:15:34.268 13:25:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:15:34.268 13:25:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i
00:15:34.268 13:25:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:15:34.268 13:25:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:15:34.268 13:25:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:15:34.268 13:25:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:34.268 13:25:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:34.268 13:25:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:34.268 13:25:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:15:34.268 13:25:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:34.268 13:25:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:34.268 [
00:15:34.268 {
00:15:34.268 "name": "BaseBdev1",
00:15:34.268 "aliases": [
00:15:34.268 "783d2cbc-22eb-487d-acb7-56bfc53031b4"
00:15:34.268 ],
00:15:34.268 "product_name": "Malloc disk",
00:15:34.268 "block_size": 512,
00:15:34.268 "num_blocks": 65536,
00:15:34.268 "uuid": "783d2cbc-22eb-487d-acb7-56bfc53031b4",
00:15:34.268 "assigned_rate_limits": {
00:15:34.268 "rw_ios_per_sec": 0,
00:15:34.268 "rw_mbytes_per_sec": 0,
00:15:34.268 "r_mbytes_per_sec": 0,
00:15:34.268 "w_mbytes_per_sec": 0
00:15:34.268 },
00:15:34.268 "claimed": true,
00:15:34.268 "claim_type": "exclusive_write",
00:15:34.268 "zoned": false,
00:15:34.268 "supported_io_types": {
00:15:34.268 "read": true,
00:15:34.268 "write": true,
00:15:34.268 "unmap": true,
00:15:34.268 "flush": true,
00:15:34.268 "reset": true,
00:15:34.268 "nvme_admin": false,
00:15:34.268 "nvme_io": false,
00:15:34.268 "nvme_io_md": false,
00:15:34.268 "write_zeroes": true,
00:15:34.268 "zcopy": true,
00:15:34.268 "get_zone_info": false,
00:15:34.268 "zone_management": false,
00:15:34.268 "zone_append": false,
00:15:34.268 "compare": false,
00:15:34.268 "compare_and_write": false,
00:15:34.268 "abort": true,
00:15:34.268 "seek_hole": false,
00:15:34.268 "seek_data": false,
00:15:34.268 "copy": true,
00:15:34.268 "nvme_iov_md": false
00:15:34.268 },
00:15:34.268 "memory_domains": [
00:15:34.268 {
00:15:34.268 "dma_device_id": "system",
00:15:34.268 "dma_device_type": 1
00:15:34.268 },
00:15:34.268 {
00:15:34.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:34.268 "dma_device_type": 2
00:15:34.268 }
00:15:34.268 ],
00:15:34.268 "driver_specific": {}
00:15:34.268 }
00:15:34.268 ]
00:15:34.268 13:25:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:34.268 13:25:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:15:34.268 13:25:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:15:34.268 13:25:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:34.268 13:25:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:34.268 13:25:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:34.268 13:25:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:34.268 13:25:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:15:34.268 13:25:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:34.268 13:25:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:34.268 13:25:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:34.268 13:25:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:34.268 13:25:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:34.268 13:25:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:34.268 13:25:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:34.268 13:25:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:34.268 13:25:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:34.268 13:25:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:34.268 "name": "Existed_Raid",
00:15:34.268 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:34.268 "strip_size_kb": 64,
00:15:34.268 "state": "configuring",
00:15:34.268 "raid_level": "raid5f",
00:15:34.268 "superblock": false,
00:15:34.268 "num_base_bdevs": 4,
00:15:34.268 "num_base_bdevs_discovered": 1,
00:15:34.268 "num_base_bdevs_operational": 4,
00:15:34.268 "base_bdevs_list": [
00:15:34.268 {
00:15:34.268 "name": "BaseBdev1",
00:15:34.268 "uuid": "783d2cbc-22eb-487d-acb7-56bfc53031b4",
00:15:34.268 "is_configured": true,
00:15:34.268 "data_offset": 0,
00:15:34.268 "data_size": 65536
00:15:34.268 },
00:15:34.268 {
00:15:34.268 "name": "BaseBdev2",
00:15:34.268 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:34.268 "is_configured": false,
00:15:34.268 "data_offset": 0,
00:15:34.268 "data_size": 0
00:15:34.268 },
00:15:34.268 {
00:15:34.268 "name": "BaseBdev3",
00:15:34.268 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:34.268 "is_configured": false,
00:15:34.268 "data_offset": 0,
00:15:34.268 "data_size": 0
00:15:34.268 },
00:15:34.268 {
00:15:34.268 "name": "BaseBdev4",
00:15:34.269 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:34.269 "is_configured": false,
00:15:34.269 "data_offset": 0,
00:15:34.269 "data_size": 0
00:15:34.269 }
00:15:34.269 ]
00:15:34.269 }'
00:15:34.269 13:25:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:34.269 13:25:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:34.838 13:25:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:15:34.838 13:25:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:34.838 13:25:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:34.838 [2024-11-17 13:25:23.868768] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:15:34.838 [2024-11-17 13:25:23.868825] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:15:34.838 13:25:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:34.838 13:25:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:15:34.838 13:25:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:34.838 13:25:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:34.838 [2024-11-17 13:25:23.880789] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:15:34.838 [2024-11-17 13:25:23.882617] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:15:34.838 [2024-11-17 13:25:23.882662] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:15:34.838 [2024-11-17 13:25:23.882672] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:15:34.838 [2024-11-17 13:25:23.882683] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:15:34.838 [2024-11-17 13:25:23.882690] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:15:34.838 [2024-11-17 13:25:23.882698] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:15:34.838 13:25:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:34.838 13:25:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:15:34.838 13:25:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:15:34.838 13:25:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:15:34.838 13:25:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:34.838 13:25:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:34.838 13:25:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:34.838 13:25:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:34.838 13:25:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:15:34.838 13:25:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:34.838 13:25:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:34.838 13:25:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:34.838 13:25:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:34.838 13:25:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:34.838 13:25:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:34.838 13:25:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:34.838 13:25:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:34.839 13:25:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:34.839 13:25:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:34.839 "name": "Existed_Raid",
00:15:34.839 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:34.839 "strip_size_kb": 64,
00:15:34.839 "state": "configuring",
00:15:34.839 "raid_level": "raid5f",
00:15:34.839 "superblock": false,
00:15:34.839 "num_base_bdevs": 4,
00:15:34.839 "num_base_bdevs_discovered": 1,
00:15:34.839 "num_base_bdevs_operational": 4,
00:15:34.839 "base_bdevs_list": [
00:15:34.839 {
00:15:34.839 "name": "BaseBdev1",
00:15:34.839 "uuid": "783d2cbc-22eb-487d-acb7-56bfc53031b4",
00:15:34.839 "is_configured": true,
00:15:34.839 "data_offset": 0,
00:15:34.839 "data_size": 65536
00:15:34.839 },
00:15:34.839 {
00:15:34.839 "name": "BaseBdev2",
00:15:34.839 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:34.839 "is_configured": false,
00:15:34.839 "data_offset": 0,
00:15:34.839 "data_size": 0
00:15:34.839 },
00:15:34.839 {
00:15:34.839 "name": "BaseBdev3",
00:15:34.839 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:34.839 "is_configured": false,
00:15:34.839 "data_offset": 0,
00:15:34.839 "data_size": 0
00:15:34.839 },
00:15:34.839 {
00:15:34.839 "name": "BaseBdev4",
00:15:34.839 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:34.839 "is_configured": false,
00:15:34.839 "data_offset": 0,
00:15:34.839 "data_size": 0
00:15:34.839 }
00:15:34.839 ]
00:15:34.839 }'
00:15:34.839 13:25:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:34.839 13:25:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:35.409 13:25:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:15:35.409 13:25:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:35.409 13:25:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:35.409 [2024-11-17 13:25:24.385555] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:15:35.409 BaseBdev2
00:15:35.409 13:25:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:35.409 13:25:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:15:35.409 13:25:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:15:35.409 13:25:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:15:35.409 13:25:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i
00:15:35.409 13:25:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:15:35.409 13:25:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:15:35.409 13:25:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:15:35.409 13:25:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:35.409 13:25:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:35.409 13:25:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:35.409 13:25:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:15:35.409 13:25:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:35.409 13:25:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:35.409 [
00:15:35.409 {
00:15:35.409 "name": "BaseBdev2",
00:15:35.409 "aliases": [
00:15:35.409 "dfce2e9f-8d68-46b9-8154-d52981790336"
00:15:35.409 ],
00:15:35.409 "product_name": "Malloc disk",
00:15:35.409 "block_size": 512,
00:15:35.409 "num_blocks": 65536,
00:15:35.409 "uuid": "dfce2e9f-8d68-46b9-8154-d52981790336",
00:15:35.409 "assigned_rate_limits": {
00:15:35.409 "rw_ios_per_sec": 0,
00:15:35.409 "rw_mbytes_per_sec": 0,
00:15:35.409 "r_mbytes_per_sec": 0,
00:15:35.409 "w_mbytes_per_sec": 0
00:15:35.409 },
00:15:35.409 "claimed": true,
00:15:35.409 "claim_type": "exclusive_write",
00:15:35.409 "zoned": false,
00:15:35.409 "supported_io_types": {
00:15:35.409 "read": true,
00:15:35.409 "write": true,
00:15:35.409 "unmap": true,
00:15:35.409 "flush": true,
00:15:35.409 "reset": true,
00:15:35.409 "nvme_admin": false,
00:15:35.409 "nvme_io": false,
00:15:35.409 "nvme_io_md": false,
00:15:35.409 "write_zeroes": true,
00:15:35.409 "zcopy": true,
00:15:35.409 "get_zone_info": false,
00:15:35.409 "zone_management": false,
00:15:35.409 "zone_append": false,
00:15:35.409 "compare": false,
00:15:35.409 "compare_and_write": false,
00:15:35.409 "abort": true,
00:15:35.409 "seek_hole": false,
00:15:35.409 "seek_data": false,
00:15:35.409 "copy": true,
00:15:35.409 "nvme_iov_md": false
00:15:35.409 },
00:15:35.409 "memory_domains": [
00:15:35.409 {
00:15:35.409 "dma_device_id": "system",
00:15:35.409 "dma_device_type": 1
00:15:35.409 },
00:15:35.409 {
00:15:35.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:35.409 "dma_device_type": 2
00:15:35.409 }
00:15:35.409 ],
00:15:35.409 "driver_specific": {}
00:15:35.409 }
00:15:35.409 ]
00:15:35.409 13:25:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:35.409 13:25:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:15:35.409 13:25:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:15:35.409 13:25:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:15:35.409 13:25:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:15:35.409 13:25:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:35.409 13:25:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:35.409 13:25:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:35.409 13:25:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:35.409 13:25:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:15:35.409 13:25:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:35.409 13:25:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:35.409 13:25:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:35.409 13:25:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:35.409 13:25:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:35.409 13:25:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:35.409 13:25:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:35.409 13:25:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:35.409 13:25:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:35.409 13:25:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:35.409 "name": "Existed_Raid",
00:15:35.409 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:35.409 "strip_size_kb": 64,
00:15:35.409 "state": "configuring",
00:15:35.409 "raid_level": "raid5f",
00:15:35.409 "superblock": false,
00:15:35.409 "num_base_bdevs": 4,
00:15:35.409 "num_base_bdevs_discovered": 2,
00:15:35.409 "num_base_bdevs_operational": 4,
00:15:35.409 "base_bdevs_list": [
00:15:35.409 {
00:15:35.409 "name": "BaseBdev1",
00:15:35.409 "uuid": "783d2cbc-22eb-487d-acb7-56bfc53031b4",
00:15:35.409 "is_configured": true,
00:15:35.409 "data_offset": 0,
00:15:35.409 "data_size": 65536
00:15:35.409 },
00:15:35.409 {
00:15:35.409 "name": "BaseBdev2",
00:15:35.409 "uuid": "dfce2e9f-8d68-46b9-8154-d52981790336",
00:15:35.409 "is_configured": true,
00:15:35.409 "data_offset": 0,
00:15:35.409 "data_size": 65536
00:15:35.409 },
00:15:35.409 {
00:15:35.409 "name": "BaseBdev3",
00:15:35.409 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:35.409 "is_configured": false,
00:15:35.409 "data_offset": 0,
00:15:35.409 "data_size": 0
00:15:35.409 },
00:15:35.409 {
00:15:35.409 "name": "BaseBdev4",
00:15:35.409 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:35.409 "is_configured": false,
00:15:35.409 "data_offset": 0,
00:15:35.409 "data_size": 0
00:15:35.409 }
00:15:35.409 ]
00:15:35.409 }'
00:15:35.409 13:25:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:35.409 13:25:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:35.670 13:25:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:15:35.670 13:25:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:35.670 13:25:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:35.670 [2024-11-17 13:25:24.875959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:15:35.670 BaseBdev3
00:15:35.670 13:25:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:35.670 13:25:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:15:35.670 13:25:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:15:35.670 13:25:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:15:35.670 13:25:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i
00:15:35.670 13:25:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:15:35.670 13:25:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:15:35.670 13:25:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:15:35.670 13:25:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:35.670 13:25:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:35.670 13:25:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:35.670 13:25:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:15:35.670 13:25:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:35.670 13:25:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:35.930 [
00:15:35.930 {
00:15:35.930 "name": "BaseBdev3",
00:15:35.930 "aliases": [
00:15:35.930 "ac2a9691-4a8f-48b2-a0c1-6e60efdcece0"
00:15:35.930 ],
00:15:35.930 "product_name": "Malloc disk",
00:15:35.930 "block_size": 512,
00:15:35.930 "num_blocks": 65536,
00:15:35.930 "uuid": "ac2a9691-4a8f-48b2-a0c1-6e60efdcece0",
00:15:35.930 "assigned_rate_limits": {
00:15:35.930 "rw_ios_per_sec": 0,
00:15:35.930 "rw_mbytes_per_sec": 0,
00:15:35.930 "r_mbytes_per_sec": 0,
00:15:35.930 "w_mbytes_per_sec": 0
00:15:35.930 },
00:15:35.930 "claimed": true,
00:15:35.930 "claim_type": "exclusive_write",
00:15:35.930 "zoned": false,
00:15:35.930 "supported_io_types": {
00:15:35.930 "read": true,
00:15:35.930 "write": true,
00:15:35.930 "unmap": true,
00:15:35.930 "flush": true,
00:15:35.930 "reset": true,
00:15:35.930 "nvme_admin": false,
00:15:35.930 "nvme_io": false,
00:15:35.930 "nvme_io_md": false,
00:15:35.930 "write_zeroes": true,
00:15:35.930 "zcopy": true,
00:15:35.930 "get_zone_info": false,
00:15:35.930 "zone_management": false,
00:15:35.930 "zone_append": false,
00:15:35.930 "compare": false,
00:15:35.930 "compare_and_write": false,
00:15:35.930 "abort": true,
00:15:35.930 "seek_hole": false,
00:15:35.930 "seek_data": false,
00:15:35.930 "copy": true,
00:15:35.930 "nvme_iov_md": false
00:15:35.930 },
00:15:35.930 "memory_domains": [
00:15:35.930 {
00:15:35.930 "dma_device_id": "system",
00:15:35.930 "dma_device_type": 1
00:15:35.930 },
00:15:35.930 {
00:15:35.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:35.930 "dma_device_type": 2
00:15:35.930 }
00:15:35.930 ],
00:15:35.930 "driver_specific": {}
00:15:35.930 }
00:15:35.930 ]
00:15:35.930 13:25:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:35.930 13:25:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:15:35.930 13:25:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:15:35.930 13:25:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:15:35.930 13:25:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:15:35.930 13:25:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:35.930 13:25:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:35.930 13:25:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:35.930 13:25:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:35.930 13:25:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:15:35.930 13:25:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:35.930 13:25:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:35.930 13:25:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:35.930 13:25:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:35.930 13:25:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:35.931 13:25:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:35.931 13:25:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:35.931 13:25:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:35.931 13:25:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:35.931 13:25:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:35.931 "name": "Existed_Raid",
00:15:35.931 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:35.931 "strip_size_kb": 64,
00:15:35.931 "state": "configuring",
00:15:35.931 "raid_level": "raid5f",
00:15:35.931 "superblock": false,
00:15:35.931 "num_base_bdevs": 4,
00:15:35.931 "num_base_bdevs_discovered": 3,
00:15:35.931 "num_base_bdevs_operational": 4,
00:15:35.931 "base_bdevs_list": [
00:15:35.931 {
00:15:35.931 "name": "BaseBdev1",
00:15:35.931 "uuid": "783d2cbc-22eb-487d-acb7-56bfc53031b4",
00:15:35.931 "is_configured": true,
00:15:35.931 "data_offset": 0,
00:15:35.931 "data_size": 65536
00:15:35.931 },
00:15:35.931 {
00:15:35.931 "name": "BaseBdev2",
00:15:35.931 "uuid": "dfce2e9f-8d68-46b9-8154-d52981790336",
00:15:35.931 "is_configured": true,
00:15:35.931 "data_offset": 0,
00:15:35.931 "data_size": 65536
00:15:35.931 },
00:15:35.931 {
00:15:35.931 "name": "BaseBdev3",
00:15:35.931 "uuid": "ac2a9691-4a8f-48b2-a0c1-6e60efdcece0",
00:15:35.931 "is_configured": true,
00:15:35.931 "data_offset": 0,
00:15:35.931 "data_size": 65536
00:15:35.931 },
00:15:35.931 {
00:15:35.931 "name": "BaseBdev4",
00:15:35.931 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:35.931 "is_configured": false,
00:15:35.931 "data_offset": 0,
00:15:35.931 "data_size": 0
00:15:35.931 }
00:15:35.931 ]
00:15:35.931 }'
00:15:35.931 13:25:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:35.931 13:25:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:36.191 13:25:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:15:36.191 13:25:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:36.191 13:25:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:36.191 [2024-11-17 13:25:25.401320] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:15:36.191 [2024-11-17 13:25:25.401460] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:15:36.191 [2024-11-17 13:25:25.401518] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512
00:15:36.191 [2024-11-17 13:25:25.401823] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:15:36.191 [2024-11-17 13:25:25.409177] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:15:36.191 [2024-11-17 13:25:25.409257] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:15:36.191 [2024-11-17 13:25:25.409590] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:36.191 BaseBdev4
00:15:36.191 13:25:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:36.191 13:25:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4
00:15:36.191 13:25:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4
00:15:36.191 13:25:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:15:36.191 13:25:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i
00:15:36.191 13:25:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:15:36.191 13:25:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:15:36.191 13:25:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:15:36.191 13:25:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:36.191 13:25:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:36.451 13:25:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:36.451 13:25:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000
00:15:36.451 13:25:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:36.451 13:25:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:36.451 [
00:15:36.451 {
00:15:36.451 "name": "BaseBdev4",
00:15:36.451 "aliases": [
00:15:36.451 "513c3bd7-0901-40e8-a0cb-cf084373b5e7"
00:15:36.451 ],
00:15:36.451 "product_name": "Malloc disk",
00:15:36.451 "block_size": 512,
00:15:36.451 "num_blocks": 65536,
00:15:36.451 "uuid": "513c3bd7-0901-40e8-a0cb-cf084373b5e7",
00:15:36.451 "assigned_rate_limits": {
00:15:36.451 "rw_ios_per_sec": 0,
00:15:36.451 "rw_mbytes_per_sec": 0,
00:15:36.451 "r_mbytes_per_sec": 0,
00:15:36.451 "w_mbytes_per_sec": 0
00:15:36.451 },
00:15:36.451 "claimed": true,
00:15:36.451 "claim_type": "exclusive_write",
00:15:36.451 "zoned": false,
00:15:36.451 "supported_io_types": {
00:15:36.451 "read": true,
00:15:36.451 "write": true,
00:15:36.451 "unmap": true,
00:15:36.451 "flush": true,
00:15:36.451 "reset": true,
00:15:36.451 "nvme_admin": false,
00:15:36.451 "nvme_io": false,
00:15:36.451 "nvme_io_md": false,
00:15:36.451 "write_zeroes": true,
00:15:36.451 "zcopy": true,
00:15:36.451 "get_zone_info": false,
00:15:36.451 "zone_management": false,
00:15:36.451 "zone_append": false,
00:15:36.451 "compare": false,
00:15:36.451 "compare_and_write": false,
00:15:36.451 "abort": true,
00:15:36.451 "seek_hole": false,
00:15:36.451 "seek_data": false,
00:15:36.451 "copy": true,
00:15:36.451 "nvme_iov_md": false
00:15:36.451 },
00:15:36.451 "memory_domains": [
00:15:36.451 {
00:15:36.451 "dma_device_id": "system",
00:15:36.451 "dma_device_type": 1
00:15:36.451 },
00:15:36.451 {
00:15:36.451 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:36.451 "dma_device_type": 2
00:15:36.451 }
00:15:36.451 ],
00:15:36.451 "driver_specific": {}
00:15:36.451 }
00:15:36.451 ]
00:15:36.451 13:25:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:36.451 13:25:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:15:36.451 13:25:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:15:36.451 13:25:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:15:36.451 13:25:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4
00:15:36.451 13:25:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:36.451 13:25:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:36.451 13:25:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:36.451 13:25:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:36.451 13:25:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:15:36.451 13:25:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:36.451 13:25:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:36.451 13:25:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:36.451 13:25:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:36.451 13:25:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:36.451 13:25:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:36.451 13:25:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:36.451 13:25:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:36.451 13:25:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:36.452 13:25:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:36.452 "name": "Existed_Raid",
00:15:36.452 "uuid": "ffb80c88-6687-4a36-ae0a-32f22c70e026",
00:15:36.452 "strip_size_kb": 64,
00:15:36.452 "state": "online",
00:15:36.452 "raid_level": "raid5f",
00:15:36.452 "superblock": false,
00:15:36.452 "num_base_bdevs": 4,
00:15:36.452 "num_base_bdevs_discovered": 4,
00:15:36.452 "num_base_bdevs_operational": 4,
00:15:36.452 "base_bdevs_list": [
00:15:36.452 {
00:15:36.452 "name":
"BaseBdev1", 00:15:36.452 "uuid": "783d2cbc-22eb-487d-acb7-56bfc53031b4", 00:15:36.452 "is_configured": true, 00:15:36.452 "data_offset": 0, 00:15:36.452 "data_size": 65536 00:15:36.452 }, 00:15:36.452 { 00:15:36.452 "name": "BaseBdev2", 00:15:36.452 "uuid": "dfce2e9f-8d68-46b9-8154-d52981790336", 00:15:36.452 "is_configured": true, 00:15:36.452 "data_offset": 0, 00:15:36.452 "data_size": 65536 00:15:36.452 }, 00:15:36.452 { 00:15:36.452 "name": "BaseBdev3", 00:15:36.452 "uuid": "ac2a9691-4a8f-48b2-a0c1-6e60efdcece0", 00:15:36.452 "is_configured": true, 00:15:36.452 "data_offset": 0, 00:15:36.452 "data_size": 65536 00:15:36.452 }, 00:15:36.452 { 00:15:36.452 "name": "BaseBdev4", 00:15:36.452 "uuid": "513c3bd7-0901-40e8-a0cb-cf084373b5e7", 00:15:36.452 "is_configured": true, 00:15:36.452 "data_offset": 0, 00:15:36.452 "data_size": 65536 00:15:36.452 } 00:15:36.452 ] 00:15:36.452 }' 00:15:36.452 13:25:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.452 13:25:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.711 13:25:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:36.711 13:25:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:36.711 13:25:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:36.711 13:25:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:36.711 13:25:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:36.711 13:25:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:36.711 13:25:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:36.711 13:25:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd 
bdev_get_bdevs -b Existed_Raid 00:15:36.711 13:25:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.711 13:25:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.711 [2024-11-17 13:25:25.929114] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:36.972 13:25:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.972 13:25:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:36.972 "name": "Existed_Raid", 00:15:36.972 "aliases": [ 00:15:36.972 "ffb80c88-6687-4a36-ae0a-32f22c70e026" 00:15:36.972 ], 00:15:36.972 "product_name": "Raid Volume", 00:15:36.972 "block_size": 512, 00:15:36.972 "num_blocks": 196608, 00:15:36.972 "uuid": "ffb80c88-6687-4a36-ae0a-32f22c70e026", 00:15:36.972 "assigned_rate_limits": { 00:15:36.972 "rw_ios_per_sec": 0, 00:15:36.972 "rw_mbytes_per_sec": 0, 00:15:36.972 "r_mbytes_per_sec": 0, 00:15:36.972 "w_mbytes_per_sec": 0 00:15:36.972 }, 00:15:36.972 "claimed": false, 00:15:36.972 "zoned": false, 00:15:36.972 "supported_io_types": { 00:15:36.972 "read": true, 00:15:36.972 "write": true, 00:15:36.972 "unmap": false, 00:15:36.972 "flush": false, 00:15:36.972 "reset": true, 00:15:36.972 "nvme_admin": false, 00:15:36.972 "nvme_io": false, 00:15:36.972 "nvme_io_md": false, 00:15:36.972 "write_zeroes": true, 00:15:36.972 "zcopy": false, 00:15:36.972 "get_zone_info": false, 00:15:36.972 "zone_management": false, 00:15:36.972 "zone_append": false, 00:15:36.972 "compare": false, 00:15:36.972 "compare_and_write": false, 00:15:36.972 "abort": false, 00:15:36.972 "seek_hole": false, 00:15:36.972 "seek_data": false, 00:15:36.972 "copy": false, 00:15:36.972 "nvme_iov_md": false 00:15:36.972 }, 00:15:36.972 "driver_specific": { 00:15:36.972 "raid": { 00:15:36.972 "uuid": "ffb80c88-6687-4a36-ae0a-32f22c70e026", 00:15:36.972 "strip_size_kb": 64, 
00:15:36.972 "state": "online", 00:15:36.972 "raid_level": "raid5f", 00:15:36.972 "superblock": false, 00:15:36.972 "num_base_bdevs": 4, 00:15:36.972 "num_base_bdevs_discovered": 4, 00:15:36.972 "num_base_bdevs_operational": 4, 00:15:36.972 "base_bdevs_list": [ 00:15:36.972 { 00:15:36.972 "name": "BaseBdev1", 00:15:36.972 "uuid": "783d2cbc-22eb-487d-acb7-56bfc53031b4", 00:15:36.972 "is_configured": true, 00:15:36.972 "data_offset": 0, 00:15:36.972 "data_size": 65536 00:15:36.972 }, 00:15:36.972 { 00:15:36.972 "name": "BaseBdev2", 00:15:36.972 "uuid": "dfce2e9f-8d68-46b9-8154-d52981790336", 00:15:36.972 "is_configured": true, 00:15:36.972 "data_offset": 0, 00:15:36.972 "data_size": 65536 00:15:36.972 }, 00:15:36.972 { 00:15:36.972 "name": "BaseBdev3", 00:15:36.972 "uuid": "ac2a9691-4a8f-48b2-a0c1-6e60efdcece0", 00:15:36.972 "is_configured": true, 00:15:36.972 "data_offset": 0, 00:15:36.972 "data_size": 65536 00:15:36.972 }, 00:15:36.972 { 00:15:36.972 "name": "BaseBdev4", 00:15:36.972 "uuid": "513c3bd7-0901-40e8-a0cb-cf084373b5e7", 00:15:36.972 "is_configured": true, 00:15:36.972 "data_offset": 0, 00:15:36.972 "data_size": 65536 00:15:36.972 } 00:15:36.972 ] 00:15:36.972 } 00:15:36.972 } 00:15:36.972 }' 00:15:36.972 13:25:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:36.972 13:25:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:36.972 BaseBdev2 00:15:36.972 BaseBdev3 00:15:36.972 BaseBdev4' 00:15:36.972 13:25:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:36.972 13:25:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:36.972 13:25:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:36.972 13:25:26 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:36.972 13:25:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.972 13:25:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.972 13:25:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:36.972 13:25:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.972 13:25:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:36.972 13:25:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:36.972 13:25:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:36.972 13:25:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:36.972 13:25:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:36.972 13:25:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.972 13:25:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.972 13:25:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.972 13:25:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:36.972 13:25:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:36.972 13:25:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:36.972 13:25:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:15:36.972 13:25:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:36.972 13:25:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.972 13:25:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.972 13:25:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.972 13:25:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:36.972 13:25:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:36.972 13:25:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:36.972 13:25:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:36.972 13:25:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.972 13:25:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.972 13:25:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:36.972 13:25:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.233 13:25:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:37.233 13:25:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:37.233 13:25:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:37.233 13:25:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.233 13:25:26 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:15:37.233 [2024-11-17 13:25:26.224471] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:37.233 13:25:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.233 13:25:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:37.233 13:25:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:37.233 13:25:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:37.233 13:25:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:37.233 13:25:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:37.233 13:25:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:37.233 13:25:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:37.233 13:25:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:37.233 13:25:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:37.233 13:25:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:37.233 13:25:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:37.233 13:25:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.233 13:25:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.233 13:25:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.233 13:25:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.233 13:25:26 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:37.233 13:25:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.233 13:25:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.233 13:25:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.233 13:25:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.233 13:25:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.233 "name": "Existed_Raid", 00:15:37.233 "uuid": "ffb80c88-6687-4a36-ae0a-32f22c70e026", 00:15:37.233 "strip_size_kb": 64, 00:15:37.233 "state": "online", 00:15:37.233 "raid_level": "raid5f", 00:15:37.233 "superblock": false, 00:15:37.233 "num_base_bdevs": 4, 00:15:37.233 "num_base_bdevs_discovered": 3, 00:15:37.233 "num_base_bdevs_operational": 3, 00:15:37.233 "base_bdevs_list": [ 00:15:37.233 { 00:15:37.233 "name": null, 00:15:37.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.233 "is_configured": false, 00:15:37.233 "data_offset": 0, 00:15:37.233 "data_size": 65536 00:15:37.233 }, 00:15:37.233 { 00:15:37.233 "name": "BaseBdev2", 00:15:37.233 "uuid": "dfce2e9f-8d68-46b9-8154-d52981790336", 00:15:37.233 "is_configured": true, 00:15:37.233 "data_offset": 0, 00:15:37.233 "data_size": 65536 00:15:37.233 }, 00:15:37.233 { 00:15:37.233 "name": "BaseBdev3", 00:15:37.233 "uuid": "ac2a9691-4a8f-48b2-a0c1-6e60efdcece0", 00:15:37.233 "is_configured": true, 00:15:37.233 "data_offset": 0, 00:15:37.233 "data_size": 65536 00:15:37.233 }, 00:15:37.233 { 00:15:37.233 "name": "BaseBdev4", 00:15:37.233 "uuid": "513c3bd7-0901-40e8-a0cb-cf084373b5e7", 00:15:37.233 "is_configured": true, 00:15:37.233 "data_offset": 0, 00:15:37.233 "data_size": 65536 00:15:37.233 } 00:15:37.233 ] 00:15:37.233 }' 00:15:37.233 
13:25:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.233 13:25:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.803 13:25:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:37.803 13:25:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:37.803 13:25:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.803 13:25:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:37.803 13:25:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.803 13:25:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.803 13:25:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.803 13:25:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:37.803 13:25:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:37.803 13:25:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:37.803 13:25:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.803 13:25:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.803 [2024-11-17 13:25:26.811824] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:37.803 [2024-11-17 13:25:26.811971] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:37.803 [2024-11-17 13:25:26.905744] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:37.803 13:25:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:15:37.803 13:25:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:37.803 13:25:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:37.803 13:25:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.803 13:25:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:37.803 13:25:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.803 13:25:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.803 13:25:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.803 13:25:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:37.803 13:25:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:37.803 13:25:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:37.803 13:25:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.803 13:25:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.803 [2024-11-17 13:25:26.961680] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:38.063 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.063 13:25:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:38.063 13:25:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:38.063 13:25:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.063 13:25:27 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.063 13:25:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:38.063 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.063 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.063 13:25:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:38.063 13:25:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:38.063 13:25:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:38.063 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.063 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.063 [2024-11-17 13:25:27.116059] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:38.063 [2024-11-17 13:25:27.116227] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:38.063 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.063 13:25:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:38.063 13:25:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:38.063 13:25:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.063 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.063 13:25:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:38.063 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:15:38.063 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.063 13:25:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:38.063 13:25:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:38.063 13:25:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:38.063 13:25:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:38.063 13:25:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:38.063 13:25:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:38.063 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.063 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.323 BaseBdev2 00:15:38.323 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.323 13:25:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:38.323 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:38.323 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:38.323 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:38.323 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:38.323 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:38.323 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:38.323 13:25:27 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.323 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.323 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.323 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:38.323 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.323 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.323 [ 00:15:38.323 { 00:15:38.323 "name": "BaseBdev2", 00:15:38.323 "aliases": [ 00:15:38.323 "4babce34-7a37-4190-b320-08546b8b59b3" 00:15:38.323 ], 00:15:38.323 "product_name": "Malloc disk", 00:15:38.323 "block_size": 512, 00:15:38.323 "num_blocks": 65536, 00:15:38.323 "uuid": "4babce34-7a37-4190-b320-08546b8b59b3", 00:15:38.323 "assigned_rate_limits": { 00:15:38.323 "rw_ios_per_sec": 0, 00:15:38.323 "rw_mbytes_per_sec": 0, 00:15:38.323 "r_mbytes_per_sec": 0, 00:15:38.323 "w_mbytes_per_sec": 0 00:15:38.323 }, 00:15:38.323 "claimed": false, 00:15:38.323 "zoned": false, 00:15:38.323 "supported_io_types": { 00:15:38.323 "read": true, 00:15:38.323 "write": true, 00:15:38.323 "unmap": true, 00:15:38.323 "flush": true, 00:15:38.323 "reset": true, 00:15:38.323 "nvme_admin": false, 00:15:38.323 "nvme_io": false, 00:15:38.323 "nvme_io_md": false, 00:15:38.323 "write_zeroes": true, 00:15:38.323 "zcopy": true, 00:15:38.323 "get_zone_info": false, 00:15:38.323 "zone_management": false, 00:15:38.323 "zone_append": false, 00:15:38.323 "compare": false, 00:15:38.323 "compare_and_write": false, 00:15:38.323 "abort": true, 00:15:38.323 "seek_hole": false, 00:15:38.323 "seek_data": false, 00:15:38.323 "copy": true, 00:15:38.323 "nvme_iov_md": false 00:15:38.323 }, 00:15:38.323 "memory_domains": [ 00:15:38.323 { 00:15:38.323 "dma_device_id": "system", 00:15:38.323 
"dma_device_type": 1 00:15:38.323 }, 00:15:38.323 { 00:15:38.323 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:38.323 "dma_device_type": 2 00:15:38.323 } 00:15:38.323 ], 00:15:38.323 "driver_specific": {} 00:15:38.323 } 00:15:38.323 ] 00:15:38.323 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.323 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:38.323 13:25:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:38.323 13:25:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:38.323 13:25:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:38.323 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.323 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.323 BaseBdev3 00:15:38.323 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.323 13:25:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:38.323 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:38.323 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:38.323 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:38.323 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:38.323 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:38.323 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:38.323 13:25:27 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.323 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.323 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.323 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:38.323 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.323 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.323 [ 00:15:38.323 { 00:15:38.323 "name": "BaseBdev3", 00:15:38.323 "aliases": [ 00:15:38.323 "ef7b4ee9-7a3c-41bc-b075-6883539709fe" 00:15:38.323 ], 00:15:38.323 "product_name": "Malloc disk", 00:15:38.323 "block_size": 512, 00:15:38.323 "num_blocks": 65536, 00:15:38.323 "uuid": "ef7b4ee9-7a3c-41bc-b075-6883539709fe", 00:15:38.323 "assigned_rate_limits": { 00:15:38.323 "rw_ios_per_sec": 0, 00:15:38.323 "rw_mbytes_per_sec": 0, 00:15:38.323 "r_mbytes_per_sec": 0, 00:15:38.323 "w_mbytes_per_sec": 0 00:15:38.323 }, 00:15:38.323 "claimed": false, 00:15:38.323 "zoned": false, 00:15:38.323 "supported_io_types": { 00:15:38.323 "read": true, 00:15:38.323 "write": true, 00:15:38.323 "unmap": true, 00:15:38.323 "flush": true, 00:15:38.323 "reset": true, 00:15:38.323 "nvme_admin": false, 00:15:38.323 "nvme_io": false, 00:15:38.323 "nvme_io_md": false, 00:15:38.323 "write_zeroes": true, 00:15:38.323 "zcopy": true, 00:15:38.323 "get_zone_info": false, 00:15:38.323 "zone_management": false, 00:15:38.323 "zone_append": false, 00:15:38.323 "compare": false, 00:15:38.323 "compare_and_write": false, 00:15:38.323 "abort": true, 00:15:38.323 "seek_hole": false, 00:15:38.323 "seek_data": false, 00:15:38.323 "copy": true, 00:15:38.323 "nvme_iov_md": false 00:15:38.323 }, 00:15:38.323 "memory_domains": [ 00:15:38.323 { 00:15:38.323 
"dma_device_id": "system", 00:15:38.323 "dma_device_type": 1 00:15:38.323 }, 00:15:38.323 { 00:15:38.323 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:38.323 "dma_device_type": 2 00:15:38.323 } 00:15:38.323 ], 00:15:38.323 "driver_specific": {} 00:15:38.323 } 00:15:38.323 ] 00:15:38.323 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.323 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:38.323 13:25:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:38.323 13:25:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:38.323 13:25:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:38.323 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.323 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.323 BaseBdev4 00:15:38.323 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.323 13:25:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:38.324 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:38.324 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:38.324 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:38.324 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:38.324 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:38.324 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 
00:15:38.324 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.324 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.324 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.324 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:38.324 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.324 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.324 [ 00:15:38.324 { 00:15:38.324 "name": "BaseBdev4", 00:15:38.324 "aliases": [ 00:15:38.324 "b50cc012-84c2-4f2b-84da-3eb4b9cc0609" 00:15:38.324 ], 00:15:38.324 "product_name": "Malloc disk", 00:15:38.324 "block_size": 512, 00:15:38.324 "num_blocks": 65536, 00:15:38.324 "uuid": "b50cc012-84c2-4f2b-84da-3eb4b9cc0609", 00:15:38.324 "assigned_rate_limits": { 00:15:38.324 "rw_ios_per_sec": 0, 00:15:38.324 "rw_mbytes_per_sec": 0, 00:15:38.324 "r_mbytes_per_sec": 0, 00:15:38.324 "w_mbytes_per_sec": 0 00:15:38.324 }, 00:15:38.324 "claimed": false, 00:15:38.324 "zoned": false, 00:15:38.324 "supported_io_types": { 00:15:38.324 "read": true, 00:15:38.324 "write": true, 00:15:38.324 "unmap": true, 00:15:38.324 "flush": true, 00:15:38.324 "reset": true, 00:15:38.324 "nvme_admin": false, 00:15:38.324 "nvme_io": false, 00:15:38.324 "nvme_io_md": false, 00:15:38.324 "write_zeroes": true, 00:15:38.324 "zcopy": true, 00:15:38.324 "get_zone_info": false, 00:15:38.324 "zone_management": false, 00:15:38.324 "zone_append": false, 00:15:38.324 "compare": false, 00:15:38.324 "compare_and_write": false, 00:15:38.324 "abort": true, 00:15:38.324 "seek_hole": false, 00:15:38.324 "seek_data": false, 00:15:38.324 "copy": true, 00:15:38.324 "nvme_iov_md": false 00:15:38.324 }, 00:15:38.324 "memory_domains": [ 
00:15:38.324 { 00:15:38.324 "dma_device_id": "system", 00:15:38.324 "dma_device_type": 1 00:15:38.324 }, 00:15:38.324 { 00:15:38.324 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:38.324 "dma_device_type": 2 00:15:38.324 } 00:15:38.324 ], 00:15:38.324 "driver_specific": {} 00:15:38.324 } 00:15:38.324 ] 00:15:38.324 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.324 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:38.324 13:25:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:38.324 13:25:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:38.324 13:25:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:38.324 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.324 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.324 [2024-11-17 13:25:27.512905] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:38.324 [2024-11-17 13:25:27.512951] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:38.324 [2024-11-17 13:25:27.512970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:38.324 [2024-11-17 13:25:27.514722] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:38.324 [2024-11-17 13:25:27.514776] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:38.324 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.324 13:25:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # 
verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:38.324 13:25:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:38.324 13:25:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:38.324 13:25:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:38.324 13:25:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:38.324 13:25:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:38.324 13:25:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.324 13:25:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.324 13:25:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.324 13:25:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.324 13:25:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.324 13:25:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:38.324 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.324 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.324 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.584 13:25:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.584 "name": "Existed_Raid", 00:15:38.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.584 "strip_size_kb": 64, 00:15:38.584 "state": "configuring", 00:15:38.584 "raid_level": "raid5f", 00:15:38.584 
"superblock": false, 00:15:38.584 "num_base_bdevs": 4, 00:15:38.584 "num_base_bdevs_discovered": 3, 00:15:38.584 "num_base_bdevs_operational": 4, 00:15:38.584 "base_bdevs_list": [ 00:15:38.584 { 00:15:38.584 "name": "BaseBdev1", 00:15:38.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.584 "is_configured": false, 00:15:38.584 "data_offset": 0, 00:15:38.584 "data_size": 0 00:15:38.584 }, 00:15:38.584 { 00:15:38.584 "name": "BaseBdev2", 00:15:38.584 "uuid": "4babce34-7a37-4190-b320-08546b8b59b3", 00:15:38.584 "is_configured": true, 00:15:38.584 "data_offset": 0, 00:15:38.584 "data_size": 65536 00:15:38.584 }, 00:15:38.584 { 00:15:38.584 "name": "BaseBdev3", 00:15:38.584 "uuid": "ef7b4ee9-7a3c-41bc-b075-6883539709fe", 00:15:38.584 "is_configured": true, 00:15:38.584 "data_offset": 0, 00:15:38.584 "data_size": 65536 00:15:38.584 }, 00:15:38.584 { 00:15:38.584 "name": "BaseBdev4", 00:15:38.584 "uuid": "b50cc012-84c2-4f2b-84da-3eb4b9cc0609", 00:15:38.584 "is_configured": true, 00:15:38.584 "data_offset": 0, 00:15:38.584 "data_size": 65536 00:15:38.584 } 00:15:38.584 ] 00:15:38.584 }' 00:15:38.584 13:25:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.584 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.844 13:25:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:38.844 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.844 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.844 [2024-11-17 13:25:27.936231] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:38.844 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.844 13:25:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid5f 64 4 00:15:38.844 13:25:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:38.844 13:25:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:38.844 13:25:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:38.844 13:25:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:38.844 13:25:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:38.844 13:25:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.844 13:25:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.844 13:25:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.844 13:25:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.844 13:25:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:38.844 13:25:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.844 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.844 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.844 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.844 13:25:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.844 "name": "Existed_Raid", 00:15:38.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.844 "strip_size_kb": 64, 00:15:38.844 "state": "configuring", 00:15:38.844 "raid_level": "raid5f", 00:15:38.844 "superblock": false, 
00:15:38.844 "num_base_bdevs": 4, 00:15:38.844 "num_base_bdevs_discovered": 2, 00:15:38.844 "num_base_bdevs_operational": 4, 00:15:38.844 "base_bdevs_list": [ 00:15:38.844 { 00:15:38.844 "name": "BaseBdev1", 00:15:38.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.844 "is_configured": false, 00:15:38.844 "data_offset": 0, 00:15:38.844 "data_size": 0 00:15:38.844 }, 00:15:38.844 { 00:15:38.844 "name": null, 00:15:38.844 "uuid": "4babce34-7a37-4190-b320-08546b8b59b3", 00:15:38.844 "is_configured": false, 00:15:38.844 "data_offset": 0, 00:15:38.844 "data_size": 65536 00:15:38.844 }, 00:15:38.844 { 00:15:38.844 "name": "BaseBdev3", 00:15:38.844 "uuid": "ef7b4ee9-7a3c-41bc-b075-6883539709fe", 00:15:38.844 "is_configured": true, 00:15:38.844 "data_offset": 0, 00:15:38.844 "data_size": 65536 00:15:38.844 }, 00:15:38.844 { 00:15:38.844 "name": "BaseBdev4", 00:15:38.844 "uuid": "b50cc012-84c2-4f2b-84da-3eb4b9cc0609", 00:15:38.844 "is_configured": true, 00:15:38.844 "data_offset": 0, 00:15:38.844 "data_size": 65536 00:15:38.844 } 00:15:38.844 ] 00:15:38.844 }' 00:15:38.844 13:25:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.844 13:25:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.413 13:25:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.413 13:25:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:39.413 13:25:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.413 13:25:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.413 13:25:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.413 13:25:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:39.413 
13:25:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:39.413 13:25:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.413 13:25:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.413 [2024-11-17 13:25:28.414498] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:39.413 BaseBdev1 00:15:39.413 13:25:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.413 13:25:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:39.413 13:25:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:39.413 13:25:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:39.413 13:25:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:39.413 13:25:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:39.413 13:25:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:39.413 13:25:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:39.413 13:25:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.413 13:25:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.413 13:25:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.413 13:25:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:39.413 13:25:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.413 
13:25:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.413 [ 00:15:39.413 { 00:15:39.413 "name": "BaseBdev1", 00:15:39.413 "aliases": [ 00:15:39.413 "28ec0504-7431-46a6-9fc9-eda49e434512" 00:15:39.413 ], 00:15:39.413 "product_name": "Malloc disk", 00:15:39.413 "block_size": 512, 00:15:39.413 "num_blocks": 65536, 00:15:39.413 "uuid": "28ec0504-7431-46a6-9fc9-eda49e434512", 00:15:39.413 "assigned_rate_limits": { 00:15:39.413 "rw_ios_per_sec": 0, 00:15:39.413 "rw_mbytes_per_sec": 0, 00:15:39.413 "r_mbytes_per_sec": 0, 00:15:39.413 "w_mbytes_per_sec": 0 00:15:39.413 }, 00:15:39.413 "claimed": true, 00:15:39.413 "claim_type": "exclusive_write", 00:15:39.413 "zoned": false, 00:15:39.413 "supported_io_types": { 00:15:39.413 "read": true, 00:15:39.413 "write": true, 00:15:39.413 "unmap": true, 00:15:39.413 "flush": true, 00:15:39.413 "reset": true, 00:15:39.413 "nvme_admin": false, 00:15:39.413 "nvme_io": false, 00:15:39.413 "nvme_io_md": false, 00:15:39.413 "write_zeroes": true, 00:15:39.413 "zcopy": true, 00:15:39.413 "get_zone_info": false, 00:15:39.413 "zone_management": false, 00:15:39.413 "zone_append": false, 00:15:39.413 "compare": false, 00:15:39.413 "compare_and_write": false, 00:15:39.413 "abort": true, 00:15:39.413 "seek_hole": false, 00:15:39.413 "seek_data": false, 00:15:39.413 "copy": true, 00:15:39.413 "nvme_iov_md": false 00:15:39.413 }, 00:15:39.413 "memory_domains": [ 00:15:39.413 { 00:15:39.413 "dma_device_id": "system", 00:15:39.413 "dma_device_type": 1 00:15:39.413 }, 00:15:39.413 { 00:15:39.413 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.413 "dma_device_type": 2 00:15:39.413 } 00:15:39.413 ], 00:15:39.413 "driver_specific": {} 00:15:39.413 } 00:15:39.413 ] 00:15:39.413 13:25:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.413 13:25:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:39.413 13:25:28 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:39.413 13:25:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:39.413 13:25:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:39.413 13:25:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:39.413 13:25:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:39.413 13:25:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:39.413 13:25:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.413 13:25:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.413 13:25:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.413 13:25:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.413 13:25:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.413 13:25:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:39.413 13:25:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.413 13:25:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.413 13:25:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.413 13:25:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.413 "name": "Existed_Raid", 00:15:39.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.413 "strip_size_kb": 64, 00:15:39.413 "state": 
"configuring", 00:15:39.413 "raid_level": "raid5f", 00:15:39.413 "superblock": false, 00:15:39.413 "num_base_bdevs": 4, 00:15:39.413 "num_base_bdevs_discovered": 3, 00:15:39.413 "num_base_bdevs_operational": 4, 00:15:39.413 "base_bdevs_list": [ 00:15:39.413 { 00:15:39.413 "name": "BaseBdev1", 00:15:39.413 "uuid": "28ec0504-7431-46a6-9fc9-eda49e434512", 00:15:39.413 "is_configured": true, 00:15:39.413 "data_offset": 0, 00:15:39.413 "data_size": 65536 00:15:39.414 }, 00:15:39.414 { 00:15:39.414 "name": null, 00:15:39.414 "uuid": "4babce34-7a37-4190-b320-08546b8b59b3", 00:15:39.414 "is_configured": false, 00:15:39.414 "data_offset": 0, 00:15:39.414 "data_size": 65536 00:15:39.414 }, 00:15:39.414 { 00:15:39.414 "name": "BaseBdev3", 00:15:39.414 "uuid": "ef7b4ee9-7a3c-41bc-b075-6883539709fe", 00:15:39.414 "is_configured": true, 00:15:39.414 "data_offset": 0, 00:15:39.414 "data_size": 65536 00:15:39.414 }, 00:15:39.414 { 00:15:39.414 "name": "BaseBdev4", 00:15:39.414 "uuid": "b50cc012-84c2-4f2b-84da-3eb4b9cc0609", 00:15:39.414 "is_configured": true, 00:15:39.414 "data_offset": 0, 00:15:39.414 "data_size": 65536 00:15:39.414 } 00:15:39.414 ] 00:15:39.414 }' 00:15:39.414 13:25:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.414 13:25:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.983 13:25:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.983 13:25:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.983 13:25:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.983 13:25:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:39.983 13:25:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.983 13:25:28 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:39.983 13:25:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:39.983 13:25:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.984 13:25:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.984 [2024-11-17 13:25:28.961799] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:39.984 13:25:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.984 13:25:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:39.984 13:25:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:39.984 13:25:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:39.984 13:25:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:39.984 13:25:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:39.984 13:25:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:39.984 13:25:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.984 13:25:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.984 13:25:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.984 13:25:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.984 13:25:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:39.984 13:25:28 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.984 13:25:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.984 13:25:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.984 13:25:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.984 13:25:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.984 "name": "Existed_Raid", 00:15:39.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.984 "strip_size_kb": 64, 00:15:39.984 "state": "configuring", 00:15:39.984 "raid_level": "raid5f", 00:15:39.984 "superblock": false, 00:15:39.984 "num_base_bdevs": 4, 00:15:39.984 "num_base_bdevs_discovered": 2, 00:15:39.984 "num_base_bdevs_operational": 4, 00:15:39.984 "base_bdevs_list": [ 00:15:39.984 { 00:15:39.984 "name": "BaseBdev1", 00:15:39.984 "uuid": "28ec0504-7431-46a6-9fc9-eda49e434512", 00:15:39.984 "is_configured": true, 00:15:39.984 "data_offset": 0, 00:15:39.984 "data_size": 65536 00:15:39.984 }, 00:15:39.984 { 00:15:39.984 "name": null, 00:15:39.984 "uuid": "4babce34-7a37-4190-b320-08546b8b59b3", 00:15:39.984 "is_configured": false, 00:15:39.984 "data_offset": 0, 00:15:39.984 "data_size": 65536 00:15:39.984 }, 00:15:39.984 { 00:15:39.984 "name": null, 00:15:39.984 "uuid": "ef7b4ee9-7a3c-41bc-b075-6883539709fe", 00:15:39.984 "is_configured": false, 00:15:39.984 "data_offset": 0, 00:15:39.984 "data_size": 65536 00:15:39.984 }, 00:15:39.984 { 00:15:39.984 "name": "BaseBdev4", 00:15:39.984 "uuid": "b50cc012-84c2-4f2b-84da-3eb4b9cc0609", 00:15:39.984 "is_configured": true, 00:15:39.984 "data_offset": 0, 00:15:39.984 "data_size": 65536 00:15:39.984 } 00:15:39.984 ] 00:15:39.984 }' 00:15:39.984 13:25:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.984 13:25:29 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.243 13:25:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.243 13:25:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:40.243 13:25:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.243 13:25:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.243 13:25:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.243 13:25:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:40.243 13:25:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:40.243 13:25:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.243 13:25:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.243 [2024-11-17 13:25:29.401041] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:40.243 13:25:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.243 13:25:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:40.243 13:25:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:40.243 13:25:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:40.243 13:25:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:40.243 13:25:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:40.243 
13:25:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:40.243 13:25:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.243 13:25:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.243 13:25:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.243 13:25:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.243 13:25:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:40.243 13:25:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.243 13:25:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.243 13:25:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.243 13:25:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.243 13:25:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.243 "name": "Existed_Raid", 00:15:40.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.243 "strip_size_kb": 64, 00:15:40.243 "state": "configuring", 00:15:40.243 "raid_level": "raid5f", 00:15:40.243 "superblock": false, 00:15:40.243 "num_base_bdevs": 4, 00:15:40.243 "num_base_bdevs_discovered": 3, 00:15:40.243 "num_base_bdevs_operational": 4, 00:15:40.243 "base_bdevs_list": [ 00:15:40.243 { 00:15:40.243 "name": "BaseBdev1", 00:15:40.243 "uuid": "28ec0504-7431-46a6-9fc9-eda49e434512", 00:15:40.243 "is_configured": true, 00:15:40.243 "data_offset": 0, 00:15:40.243 "data_size": 65536 00:15:40.243 }, 00:15:40.243 { 00:15:40.243 "name": null, 00:15:40.243 "uuid": "4babce34-7a37-4190-b320-08546b8b59b3", 00:15:40.243 "is_configured": 
false, 00:15:40.243 "data_offset": 0, 00:15:40.243 "data_size": 65536 00:15:40.243 }, 00:15:40.243 { 00:15:40.243 "name": "BaseBdev3", 00:15:40.243 "uuid": "ef7b4ee9-7a3c-41bc-b075-6883539709fe", 00:15:40.243 "is_configured": true, 00:15:40.243 "data_offset": 0, 00:15:40.243 "data_size": 65536 00:15:40.243 }, 00:15:40.243 { 00:15:40.243 "name": "BaseBdev4", 00:15:40.243 "uuid": "b50cc012-84c2-4f2b-84da-3eb4b9cc0609", 00:15:40.243 "is_configured": true, 00:15:40.243 "data_offset": 0, 00:15:40.243 "data_size": 65536 00:15:40.243 } 00:15:40.243 ] 00:15:40.243 }' 00:15:40.243 13:25:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.243 13:25:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.812 13:25:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.812 13:25:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:40.812 13:25:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.812 13:25:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.812 13:25:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.812 13:25:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:40.812 13:25:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:40.812 13:25:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.812 13:25:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.812 [2024-11-17 13:25:29.812364] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:40.812 13:25:29 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.812 13:25:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:40.812 13:25:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:40.812 13:25:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:40.812 13:25:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:40.812 13:25:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:40.812 13:25:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:40.812 13:25:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.812 13:25:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.812 13:25:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.812 13:25:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.812 13:25:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.812 13:25:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:40.812 13:25:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.812 13:25:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.812 13:25:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.812 13:25:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.812 "name": "Existed_Raid", 00:15:40.812 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:40.812 "strip_size_kb": 64, 00:15:40.812 "state": "configuring", 00:15:40.812 "raid_level": "raid5f", 00:15:40.812 "superblock": false, 00:15:40.812 "num_base_bdevs": 4, 00:15:40.812 "num_base_bdevs_discovered": 2, 00:15:40.812 "num_base_bdevs_operational": 4, 00:15:40.812 "base_bdevs_list": [ 00:15:40.812 { 00:15:40.812 "name": null, 00:15:40.812 "uuid": "28ec0504-7431-46a6-9fc9-eda49e434512", 00:15:40.812 "is_configured": false, 00:15:40.812 "data_offset": 0, 00:15:40.812 "data_size": 65536 00:15:40.812 }, 00:15:40.812 { 00:15:40.812 "name": null, 00:15:40.812 "uuid": "4babce34-7a37-4190-b320-08546b8b59b3", 00:15:40.812 "is_configured": false, 00:15:40.812 "data_offset": 0, 00:15:40.812 "data_size": 65536 00:15:40.812 }, 00:15:40.812 { 00:15:40.812 "name": "BaseBdev3", 00:15:40.812 "uuid": "ef7b4ee9-7a3c-41bc-b075-6883539709fe", 00:15:40.812 "is_configured": true, 00:15:40.812 "data_offset": 0, 00:15:40.812 "data_size": 65536 00:15:40.812 }, 00:15:40.812 { 00:15:40.812 "name": "BaseBdev4", 00:15:40.812 "uuid": "b50cc012-84c2-4f2b-84da-3eb4b9cc0609", 00:15:40.812 "is_configured": true, 00:15:40.812 "data_offset": 0, 00:15:40.812 "data_size": 65536 00:15:40.812 } 00:15:40.813 ] 00:15:40.813 }' 00:15:40.813 13:25:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.813 13:25:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.381 13:25:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:41.381 13:25:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.381 13:25:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.381 13:25:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.381 13:25:30 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.381 13:25:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:41.381 13:25:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:41.381 13:25:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.381 13:25:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.381 [2024-11-17 13:25:30.403682] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:41.381 13:25:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.381 13:25:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:41.381 13:25:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:41.381 13:25:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:41.381 13:25:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:41.381 13:25:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:41.381 13:25:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:41.381 13:25:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.381 13:25:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.382 13:25:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.382 13:25:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.382 13:25:30 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:41.382 13:25:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.382 13:25:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.382 13:25:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.382 13:25:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.382 13:25:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.382 "name": "Existed_Raid", 00:15:41.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.382 "strip_size_kb": 64, 00:15:41.382 "state": "configuring", 00:15:41.382 "raid_level": "raid5f", 00:15:41.382 "superblock": false, 00:15:41.382 "num_base_bdevs": 4, 00:15:41.382 "num_base_bdevs_discovered": 3, 00:15:41.382 "num_base_bdevs_operational": 4, 00:15:41.382 "base_bdevs_list": [ 00:15:41.382 { 00:15:41.382 "name": null, 00:15:41.382 "uuid": "28ec0504-7431-46a6-9fc9-eda49e434512", 00:15:41.382 "is_configured": false, 00:15:41.382 "data_offset": 0, 00:15:41.382 "data_size": 65536 00:15:41.382 }, 00:15:41.382 { 00:15:41.382 "name": "BaseBdev2", 00:15:41.382 "uuid": "4babce34-7a37-4190-b320-08546b8b59b3", 00:15:41.382 "is_configured": true, 00:15:41.382 "data_offset": 0, 00:15:41.382 "data_size": 65536 00:15:41.382 }, 00:15:41.382 { 00:15:41.382 "name": "BaseBdev3", 00:15:41.382 "uuid": "ef7b4ee9-7a3c-41bc-b075-6883539709fe", 00:15:41.382 "is_configured": true, 00:15:41.382 "data_offset": 0, 00:15:41.382 "data_size": 65536 00:15:41.382 }, 00:15:41.382 { 00:15:41.382 "name": "BaseBdev4", 00:15:41.382 "uuid": "b50cc012-84c2-4f2b-84da-3eb4b9cc0609", 00:15:41.382 "is_configured": true, 00:15:41.382 "data_offset": 0, 00:15:41.382 "data_size": 65536 00:15:41.382 } 00:15:41.382 ] 00:15:41.382 }' 00:15:41.382 13:25:30 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.382 13:25:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.959 13:25:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.959 13:25:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.959 13:25:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.959 13:25:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:41.959 13:25:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.959 13:25:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:41.959 13:25:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:41.959 13:25:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.959 13:25:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.959 13:25:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.959 13:25:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.959 13:25:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 28ec0504-7431-46a6-9fc9-eda49e434512 00:15:41.959 13:25:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.959 13:25:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.959 [2024-11-17 13:25:31.005113] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:41.959 [2024-11-17 
13:25:31.005245] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:41.959 [2024-11-17 13:25:31.005272] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:41.959 [2024-11-17 13:25:31.005555] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:41.959 [2024-11-17 13:25:31.012410] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:41.959 [2024-11-17 13:25:31.012463] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:41.959 [2024-11-17 13:25:31.012763] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:41.959 NewBaseBdev 00:15:41.959 13:25:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.959 13:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:41.959 13:25:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:41.959 13:25:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:41.959 13:25:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:41.959 13:25:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:41.959 13:25:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:41.959 13:25:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:41.959 13:25:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.959 13:25:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.959 13:25:31 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.959 13:25:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:41.959 13:25:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.959 13:25:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.959 [ 00:15:41.959 { 00:15:41.959 "name": "NewBaseBdev", 00:15:41.959 "aliases": [ 00:15:41.959 "28ec0504-7431-46a6-9fc9-eda49e434512" 00:15:41.959 ], 00:15:41.959 "product_name": "Malloc disk", 00:15:41.959 "block_size": 512, 00:15:41.959 "num_blocks": 65536, 00:15:41.959 "uuid": "28ec0504-7431-46a6-9fc9-eda49e434512", 00:15:41.959 "assigned_rate_limits": { 00:15:41.959 "rw_ios_per_sec": 0, 00:15:41.959 "rw_mbytes_per_sec": 0, 00:15:41.959 "r_mbytes_per_sec": 0, 00:15:41.959 "w_mbytes_per_sec": 0 00:15:41.959 }, 00:15:41.959 "claimed": true, 00:15:41.959 "claim_type": "exclusive_write", 00:15:41.959 "zoned": false, 00:15:41.959 "supported_io_types": { 00:15:41.959 "read": true, 00:15:41.959 "write": true, 00:15:41.959 "unmap": true, 00:15:41.959 "flush": true, 00:15:41.959 "reset": true, 00:15:41.959 "nvme_admin": false, 00:15:41.959 "nvme_io": false, 00:15:41.959 "nvme_io_md": false, 00:15:41.959 "write_zeroes": true, 00:15:41.959 "zcopy": true, 00:15:41.959 "get_zone_info": false, 00:15:41.959 "zone_management": false, 00:15:41.959 "zone_append": false, 00:15:41.959 "compare": false, 00:15:41.959 "compare_and_write": false, 00:15:41.959 "abort": true, 00:15:41.959 "seek_hole": false, 00:15:41.959 "seek_data": false, 00:15:41.959 "copy": true, 00:15:41.959 "nvme_iov_md": false 00:15:41.959 }, 00:15:41.959 "memory_domains": [ 00:15:41.959 { 00:15:41.959 "dma_device_id": "system", 00:15:41.959 "dma_device_type": 1 00:15:41.959 }, 00:15:41.959 { 00:15:41.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:41.959 "dma_device_type": 2 00:15:41.959 } 
00:15:41.959 ], 00:15:41.959 "driver_specific": {} 00:15:41.959 } 00:15:41.959 ] 00:15:41.959 13:25:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.959 13:25:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:41.959 13:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:41.959 13:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:41.959 13:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:41.960 13:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:41.960 13:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:41.960 13:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:41.960 13:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.960 13:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.960 13:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.960 13:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.960 13:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.960 13:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:41.960 13:25:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.960 13:25:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.960 13:25:31 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.960 13:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.960 "name": "Existed_Raid", 00:15:41.960 "uuid": "0ffb6007-8b5a-4f9b-a185-603382284b14", 00:15:41.960 "strip_size_kb": 64, 00:15:41.960 "state": "online", 00:15:41.960 "raid_level": "raid5f", 00:15:41.960 "superblock": false, 00:15:41.960 "num_base_bdevs": 4, 00:15:41.960 "num_base_bdevs_discovered": 4, 00:15:41.960 "num_base_bdevs_operational": 4, 00:15:41.960 "base_bdevs_list": [ 00:15:41.960 { 00:15:41.960 "name": "NewBaseBdev", 00:15:41.960 "uuid": "28ec0504-7431-46a6-9fc9-eda49e434512", 00:15:41.960 "is_configured": true, 00:15:41.960 "data_offset": 0, 00:15:41.960 "data_size": 65536 00:15:41.960 }, 00:15:41.960 { 00:15:41.960 "name": "BaseBdev2", 00:15:41.960 "uuid": "4babce34-7a37-4190-b320-08546b8b59b3", 00:15:41.960 "is_configured": true, 00:15:41.960 "data_offset": 0, 00:15:41.960 "data_size": 65536 00:15:41.960 }, 00:15:41.960 { 00:15:41.960 "name": "BaseBdev3", 00:15:41.960 "uuid": "ef7b4ee9-7a3c-41bc-b075-6883539709fe", 00:15:41.960 "is_configured": true, 00:15:41.960 "data_offset": 0, 00:15:41.960 "data_size": 65536 00:15:41.960 }, 00:15:41.960 { 00:15:41.960 "name": "BaseBdev4", 00:15:41.960 "uuid": "b50cc012-84c2-4f2b-84da-3eb4b9cc0609", 00:15:41.960 "is_configured": true, 00:15:41.960 "data_offset": 0, 00:15:41.960 "data_size": 65536 00:15:41.960 } 00:15:41.960 ] 00:15:41.960 }' 00:15:41.960 13:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.960 13:25:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.238 13:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:42.238 13:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:42.238 13:25:31 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:42.238 13:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:42.238 13:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:42.238 13:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:42.238 13:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:42.238 13:25:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.238 13:25:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.238 13:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:42.238 [2024-11-17 13:25:31.436075] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:42.238 13:25:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.510 13:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:42.510 "name": "Existed_Raid", 00:15:42.510 "aliases": [ 00:15:42.510 "0ffb6007-8b5a-4f9b-a185-603382284b14" 00:15:42.510 ], 00:15:42.510 "product_name": "Raid Volume", 00:15:42.510 "block_size": 512, 00:15:42.510 "num_blocks": 196608, 00:15:42.510 "uuid": "0ffb6007-8b5a-4f9b-a185-603382284b14", 00:15:42.510 "assigned_rate_limits": { 00:15:42.510 "rw_ios_per_sec": 0, 00:15:42.510 "rw_mbytes_per_sec": 0, 00:15:42.510 "r_mbytes_per_sec": 0, 00:15:42.510 "w_mbytes_per_sec": 0 00:15:42.510 }, 00:15:42.510 "claimed": false, 00:15:42.510 "zoned": false, 00:15:42.510 "supported_io_types": { 00:15:42.510 "read": true, 00:15:42.510 "write": true, 00:15:42.510 "unmap": false, 00:15:42.510 "flush": false, 00:15:42.510 "reset": true, 00:15:42.510 "nvme_admin": false, 00:15:42.510 "nvme_io": false, 00:15:42.510 "nvme_io_md": 
false, 00:15:42.510 "write_zeroes": true, 00:15:42.510 "zcopy": false, 00:15:42.510 "get_zone_info": false, 00:15:42.510 "zone_management": false, 00:15:42.510 "zone_append": false, 00:15:42.510 "compare": false, 00:15:42.510 "compare_and_write": false, 00:15:42.510 "abort": false, 00:15:42.510 "seek_hole": false, 00:15:42.510 "seek_data": false, 00:15:42.510 "copy": false, 00:15:42.510 "nvme_iov_md": false 00:15:42.510 }, 00:15:42.510 "driver_specific": { 00:15:42.510 "raid": { 00:15:42.510 "uuid": "0ffb6007-8b5a-4f9b-a185-603382284b14", 00:15:42.510 "strip_size_kb": 64, 00:15:42.510 "state": "online", 00:15:42.510 "raid_level": "raid5f", 00:15:42.510 "superblock": false, 00:15:42.510 "num_base_bdevs": 4, 00:15:42.510 "num_base_bdevs_discovered": 4, 00:15:42.510 "num_base_bdevs_operational": 4, 00:15:42.510 "base_bdevs_list": [ 00:15:42.510 { 00:15:42.510 "name": "NewBaseBdev", 00:15:42.510 "uuid": "28ec0504-7431-46a6-9fc9-eda49e434512", 00:15:42.510 "is_configured": true, 00:15:42.510 "data_offset": 0, 00:15:42.510 "data_size": 65536 00:15:42.510 }, 00:15:42.510 { 00:15:42.510 "name": "BaseBdev2", 00:15:42.510 "uuid": "4babce34-7a37-4190-b320-08546b8b59b3", 00:15:42.510 "is_configured": true, 00:15:42.510 "data_offset": 0, 00:15:42.510 "data_size": 65536 00:15:42.510 }, 00:15:42.510 { 00:15:42.510 "name": "BaseBdev3", 00:15:42.510 "uuid": "ef7b4ee9-7a3c-41bc-b075-6883539709fe", 00:15:42.510 "is_configured": true, 00:15:42.510 "data_offset": 0, 00:15:42.510 "data_size": 65536 00:15:42.510 }, 00:15:42.510 { 00:15:42.510 "name": "BaseBdev4", 00:15:42.510 "uuid": "b50cc012-84c2-4f2b-84da-3eb4b9cc0609", 00:15:42.510 "is_configured": true, 00:15:42.510 "data_offset": 0, 00:15:42.510 "data_size": 65536 00:15:42.510 } 00:15:42.510 ] 00:15:42.510 } 00:15:42.510 } 00:15:42.510 }' 00:15:42.510 13:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:42.510 13:25:31 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:42.510 BaseBdev2 00:15:42.510 BaseBdev3 00:15:42.510 BaseBdev4' 00:15:42.510 13:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:42.510 13:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:42.510 13:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:42.510 13:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:42.510 13:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:42.510 13:25:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.510 13:25:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.510 13:25:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.510 13:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:42.510 13:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:42.510 13:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:42.510 13:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:42.510 13:25:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.510 13:25:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.510 13:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:15:42.510 13:25:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.510 13:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:42.510 13:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:42.510 13:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:42.510 13:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:42.510 13:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:42.510 13:25:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.510 13:25:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.510 13:25:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.510 13:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:42.510 13:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:42.510 13:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:42.510 13:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:42.510 13:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:42.511 13:25:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.511 13:25:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.771 13:25:31 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.771 13:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:42.771 13:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:42.771 13:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:42.771 13:25:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.771 13:25:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.771 [2024-11-17 13:25:31.743355] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:42.771 [2024-11-17 13:25:31.743384] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:42.771 [2024-11-17 13:25:31.743448] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:42.771 [2024-11-17 13:25:31.743722] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:42.771 [2024-11-17 13:25:31.743732] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:42.771 13:25:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.771 13:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 82638 00:15:42.771 13:25:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 82638 ']' 00:15:42.771 13:25:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 82638 00:15:42.771 13:25:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:15:42.771 13:25:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:42.771 13:25:31 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82638 00:15:42.771 13:25:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:42.771 13:25:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:42.771 13:25:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82638' 00:15:42.771 killing process with pid 82638 00:15:42.771 13:25:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 82638 00:15:42.771 [2024-11-17 13:25:31.788705] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:42.771 13:25:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 82638 00:15:43.029 [2024-11-17 13:25:32.161674] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:44.409 13:25:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:44.409 00:15:44.409 real 0m11.306s 00:15:44.409 user 0m17.914s 00:15:44.409 sys 0m2.134s 00:15:44.409 13:25:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:44.409 13:25:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.409 ************************************ 00:15:44.409 END TEST raid5f_state_function_test 00:15:44.409 ************************************ 00:15:44.409 13:25:33 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:15:44.409 13:25:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:44.409 13:25:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:44.409 13:25:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:44.409 ************************************ 00:15:44.409 START TEST 
raid5f_state_function_test_sb 00:15:44.409 ************************************ 00:15:44.409 13:25:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:15:44.409 13:25:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:44.409 13:25:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:44.409 13:25:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:44.409 13:25:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:44.409 13:25:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:44.409 13:25:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:44.409 13:25:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:44.409 13:25:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:44.409 13:25:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:44.409 13:25:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:44.409 13:25:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:44.409 13:25:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:44.409 13:25:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:44.409 13:25:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:44.409 13:25:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:44.409 13:25:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:44.409 
13:25:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:44.409 13:25:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:44.409 13:25:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:44.409 13:25:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:44.409 13:25:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:44.409 13:25:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:44.409 13:25:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:44.409 13:25:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:44.409 13:25:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:44.409 13:25:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:44.409 13:25:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:44.409 13:25:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:44.409 13:25:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:44.409 13:25:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83304 00:15:44.409 13:25:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:44.409 13:25:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83304' 00:15:44.409 Process raid pid: 83304 00:15:44.409 13:25:33 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83304 00:15:44.409 13:25:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 83304 ']' 00:15:44.409 13:25:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:44.409 13:25:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:44.409 13:25:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:44.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:44.409 13:25:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:44.409 13:25:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.409 [2024-11-17 13:25:33.371521] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:15:44.409 [2024-11-17 13:25:33.371673] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:44.409 [2024-11-17 13:25:33.541801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:44.669 [2024-11-17 13:25:33.644943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.669 [2024-11-17 13:25:33.838731] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:44.669 [2024-11-17 13:25:33.838846] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:45.239 13:25:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:45.239 13:25:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:45.239 13:25:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:45.239 13:25:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.239 13:25:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.239 [2024-11-17 13:25:34.202303] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:45.239 [2024-11-17 13:25:34.202354] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:45.239 [2024-11-17 13:25:34.202363] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:45.239 [2024-11-17 13:25:34.202373] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:45.239 [2024-11-17 13:25:34.202378] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:15:45.239 [2024-11-17 13:25:34.202387] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:45.239 [2024-11-17 13:25:34.202392] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:45.239 [2024-11-17 13:25:34.202401] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:45.239 13:25:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.239 13:25:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:45.239 13:25:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:45.239 13:25:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:45.239 13:25:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:45.239 13:25:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:45.239 13:25:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:45.239 13:25:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.239 13:25:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.239 13:25:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.239 13:25:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.239 13:25:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.239 13:25:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:15:45.239 13:25:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.239 13:25:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.239 13:25:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.239 13:25:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.239 "name": "Existed_Raid", 00:15:45.239 "uuid": "4aca1338-2e5c-429f-b536-6493ab0da30b", 00:15:45.239 "strip_size_kb": 64, 00:15:45.239 "state": "configuring", 00:15:45.239 "raid_level": "raid5f", 00:15:45.239 "superblock": true, 00:15:45.239 "num_base_bdevs": 4, 00:15:45.239 "num_base_bdevs_discovered": 0, 00:15:45.239 "num_base_bdevs_operational": 4, 00:15:45.239 "base_bdevs_list": [ 00:15:45.239 { 00:15:45.239 "name": "BaseBdev1", 00:15:45.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.239 "is_configured": false, 00:15:45.239 "data_offset": 0, 00:15:45.239 "data_size": 0 00:15:45.239 }, 00:15:45.239 { 00:15:45.239 "name": "BaseBdev2", 00:15:45.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.239 "is_configured": false, 00:15:45.239 "data_offset": 0, 00:15:45.239 "data_size": 0 00:15:45.239 }, 00:15:45.239 { 00:15:45.239 "name": "BaseBdev3", 00:15:45.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.239 "is_configured": false, 00:15:45.239 "data_offset": 0, 00:15:45.239 "data_size": 0 00:15:45.239 }, 00:15:45.239 { 00:15:45.239 "name": "BaseBdev4", 00:15:45.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.239 "is_configured": false, 00:15:45.239 "data_offset": 0, 00:15:45.239 "data_size": 0 00:15:45.239 } 00:15:45.239 ] 00:15:45.239 }' 00:15:45.239 13:25:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.239 13:25:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:45.499 13:25:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:45.499 13:25:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.499 13:25:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.499 [2024-11-17 13:25:34.705504] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:45.499 [2024-11-17 13:25:34.705597] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:45.499 13:25:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.499 13:25:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:45.499 13:25:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.499 13:25:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.499 [2024-11-17 13:25:34.717485] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:45.499 [2024-11-17 13:25:34.717567] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:45.499 [2024-11-17 13:25:34.717594] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:45.499 [2024-11-17 13:25:34.717615] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:45.499 [2024-11-17 13:25:34.717632] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:45.499 [2024-11-17 13:25:34.717652] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:45.499 [2024-11-17 13:25:34.717669] 
bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:45.499 [2024-11-17 13:25:34.717688] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:45.499 13:25:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.499 13:25:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:45.499 13:25:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.499 13:25:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.759 [2024-11-17 13:25:34.764399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:45.759 BaseBdev1 00:15:45.759 13:25:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.759 13:25:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:45.759 13:25:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:45.759 13:25:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:45.759 13:25:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:45.759 13:25:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:45.759 13:25:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:45.759 13:25:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:45.759 13:25:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.759 13:25:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:15:45.759 13:25:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.759 13:25:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:45.759 13:25:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.759 13:25:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.759 [ 00:15:45.759 { 00:15:45.759 "name": "BaseBdev1", 00:15:45.759 "aliases": [ 00:15:45.759 "bd7a33c7-2aec-4f15-92fa-e58e498dd8e3" 00:15:45.759 ], 00:15:45.759 "product_name": "Malloc disk", 00:15:45.759 "block_size": 512, 00:15:45.759 "num_blocks": 65536, 00:15:45.759 "uuid": "bd7a33c7-2aec-4f15-92fa-e58e498dd8e3", 00:15:45.759 "assigned_rate_limits": { 00:15:45.759 "rw_ios_per_sec": 0, 00:15:45.759 "rw_mbytes_per_sec": 0, 00:15:45.759 "r_mbytes_per_sec": 0, 00:15:45.759 "w_mbytes_per_sec": 0 00:15:45.759 }, 00:15:45.759 "claimed": true, 00:15:45.759 "claim_type": "exclusive_write", 00:15:45.759 "zoned": false, 00:15:45.759 "supported_io_types": { 00:15:45.759 "read": true, 00:15:45.759 "write": true, 00:15:45.759 "unmap": true, 00:15:45.759 "flush": true, 00:15:45.759 "reset": true, 00:15:45.759 "nvme_admin": false, 00:15:45.759 "nvme_io": false, 00:15:45.759 "nvme_io_md": false, 00:15:45.759 "write_zeroes": true, 00:15:45.759 "zcopy": true, 00:15:45.759 "get_zone_info": false, 00:15:45.759 "zone_management": false, 00:15:45.759 "zone_append": false, 00:15:45.759 "compare": false, 00:15:45.759 "compare_and_write": false, 00:15:45.759 "abort": true, 00:15:45.759 "seek_hole": false, 00:15:45.759 "seek_data": false, 00:15:45.759 "copy": true, 00:15:45.759 "nvme_iov_md": false 00:15:45.759 }, 00:15:45.759 "memory_domains": [ 00:15:45.759 { 00:15:45.759 "dma_device_id": "system", 00:15:45.759 "dma_device_type": 1 00:15:45.759 }, 00:15:45.759 { 00:15:45.759 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:15:45.759 "dma_device_type": 2 00:15:45.759 } 00:15:45.759 ], 00:15:45.759 "driver_specific": {} 00:15:45.759 } 00:15:45.759 ] 00:15:45.759 13:25:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.760 13:25:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:45.760 13:25:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:45.760 13:25:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:45.760 13:25:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:45.760 13:25:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:45.760 13:25:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:45.760 13:25:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:45.760 13:25:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.760 13:25:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.760 13:25:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.760 13:25:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.760 13:25:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.760 13:25:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.760 13:25:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.760 13:25:34 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.760 13:25:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.760 13:25:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.760 "name": "Existed_Raid", 00:15:45.760 "uuid": "92c1ee86-3a58-41f1-a463-5fd5342f0264", 00:15:45.760 "strip_size_kb": 64, 00:15:45.760 "state": "configuring", 00:15:45.760 "raid_level": "raid5f", 00:15:45.760 "superblock": true, 00:15:45.760 "num_base_bdevs": 4, 00:15:45.760 "num_base_bdevs_discovered": 1, 00:15:45.760 "num_base_bdevs_operational": 4, 00:15:45.760 "base_bdevs_list": [ 00:15:45.760 { 00:15:45.760 "name": "BaseBdev1", 00:15:45.760 "uuid": "bd7a33c7-2aec-4f15-92fa-e58e498dd8e3", 00:15:45.760 "is_configured": true, 00:15:45.760 "data_offset": 2048, 00:15:45.760 "data_size": 63488 00:15:45.760 }, 00:15:45.760 { 00:15:45.760 "name": "BaseBdev2", 00:15:45.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.760 "is_configured": false, 00:15:45.760 "data_offset": 0, 00:15:45.760 "data_size": 0 00:15:45.760 }, 00:15:45.760 { 00:15:45.760 "name": "BaseBdev3", 00:15:45.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.760 "is_configured": false, 00:15:45.760 "data_offset": 0, 00:15:45.760 "data_size": 0 00:15:45.760 }, 00:15:45.760 { 00:15:45.760 "name": "BaseBdev4", 00:15:45.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.760 "is_configured": false, 00:15:45.760 "data_offset": 0, 00:15:45.760 "data_size": 0 00:15:45.760 } 00:15:45.760 ] 00:15:45.760 }' 00:15:45.760 13:25:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.760 13:25:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.329 13:25:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:46.329 13:25:35 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.329 13:25:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.329 [2024-11-17 13:25:35.255609] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:46.329 [2024-11-17 13:25:35.255667] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:46.329 13:25:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.329 13:25:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:46.329 13:25:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.329 13:25:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.329 [2024-11-17 13:25:35.267630] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:46.329 [2024-11-17 13:25:35.269367] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:46.329 [2024-11-17 13:25:35.269403] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:46.329 [2024-11-17 13:25:35.269416] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:46.329 [2024-11-17 13:25:35.269427] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:46.329 [2024-11-17 13:25:35.269434] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:46.329 [2024-11-17 13:25:35.269441] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:46.330 13:25:35 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.330 13:25:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:46.330 13:25:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:46.330 13:25:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:46.330 13:25:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:46.330 13:25:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:46.330 13:25:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:46.330 13:25:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:46.330 13:25:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:46.330 13:25:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.330 13:25:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.330 13:25:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.330 13:25:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.330 13:25:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.330 13:25:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:46.330 13:25:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.330 13:25:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.330 13:25:35 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.330 13:25:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.330 "name": "Existed_Raid", 00:15:46.330 "uuid": "786d4d90-f0fd-436c-8ff0-c1a1b2135f25", 00:15:46.330 "strip_size_kb": 64, 00:15:46.330 "state": "configuring", 00:15:46.330 "raid_level": "raid5f", 00:15:46.330 "superblock": true, 00:15:46.330 "num_base_bdevs": 4, 00:15:46.330 "num_base_bdevs_discovered": 1, 00:15:46.330 "num_base_bdevs_operational": 4, 00:15:46.330 "base_bdevs_list": [ 00:15:46.330 { 00:15:46.330 "name": "BaseBdev1", 00:15:46.330 "uuid": "bd7a33c7-2aec-4f15-92fa-e58e498dd8e3", 00:15:46.330 "is_configured": true, 00:15:46.330 "data_offset": 2048, 00:15:46.330 "data_size": 63488 00:15:46.330 }, 00:15:46.330 { 00:15:46.330 "name": "BaseBdev2", 00:15:46.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.330 "is_configured": false, 00:15:46.330 "data_offset": 0, 00:15:46.330 "data_size": 0 00:15:46.330 }, 00:15:46.330 { 00:15:46.330 "name": "BaseBdev3", 00:15:46.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.330 "is_configured": false, 00:15:46.330 "data_offset": 0, 00:15:46.330 "data_size": 0 00:15:46.330 }, 00:15:46.330 { 00:15:46.330 "name": "BaseBdev4", 00:15:46.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.330 "is_configured": false, 00:15:46.330 "data_offset": 0, 00:15:46.330 "data_size": 0 00:15:46.330 } 00:15:46.330 ] 00:15:46.330 }' 00:15:46.330 13:25:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.330 13:25:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.590 13:25:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:46.590 13:25:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:46.590 13:25:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.590 [2024-11-17 13:25:35.739617] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:46.590 BaseBdev2 00:15:46.590 13:25:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.590 13:25:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:46.590 13:25:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:46.590 13:25:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:46.590 13:25:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:46.590 13:25:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:46.590 13:25:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:46.590 13:25:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:46.590 13:25:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.590 13:25:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.590 13:25:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.590 13:25:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:46.590 13:25:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.590 13:25:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.590 [ 00:15:46.590 { 00:15:46.590 "name": "BaseBdev2", 00:15:46.590 "aliases": [ 00:15:46.590 
"88174d72-c119-49af-8fac-f39af4c1a6f4" 00:15:46.590 ], 00:15:46.590 "product_name": "Malloc disk", 00:15:46.590 "block_size": 512, 00:15:46.590 "num_blocks": 65536, 00:15:46.590 "uuid": "88174d72-c119-49af-8fac-f39af4c1a6f4", 00:15:46.590 "assigned_rate_limits": { 00:15:46.590 "rw_ios_per_sec": 0, 00:15:46.590 "rw_mbytes_per_sec": 0, 00:15:46.590 "r_mbytes_per_sec": 0, 00:15:46.590 "w_mbytes_per_sec": 0 00:15:46.590 }, 00:15:46.590 "claimed": true, 00:15:46.590 "claim_type": "exclusive_write", 00:15:46.590 "zoned": false, 00:15:46.590 "supported_io_types": { 00:15:46.590 "read": true, 00:15:46.590 "write": true, 00:15:46.590 "unmap": true, 00:15:46.590 "flush": true, 00:15:46.590 "reset": true, 00:15:46.590 "nvme_admin": false, 00:15:46.590 "nvme_io": false, 00:15:46.590 "nvme_io_md": false, 00:15:46.590 "write_zeroes": true, 00:15:46.590 "zcopy": true, 00:15:46.590 "get_zone_info": false, 00:15:46.590 "zone_management": false, 00:15:46.590 "zone_append": false, 00:15:46.590 "compare": false, 00:15:46.590 "compare_and_write": false, 00:15:46.590 "abort": true, 00:15:46.590 "seek_hole": false, 00:15:46.590 "seek_data": false, 00:15:46.590 "copy": true, 00:15:46.590 "nvme_iov_md": false 00:15:46.590 }, 00:15:46.590 "memory_domains": [ 00:15:46.590 { 00:15:46.590 "dma_device_id": "system", 00:15:46.590 "dma_device_type": 1 00:15:46.590 }, 00:15:46.590 { 00:15:46.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:46.590 "dma_device_type": 2 00:15:46.590 } 00:15:46.590 ], 00:15:46.590 "driver_specific": {} 00:15:46.590 } 00:15:46.590 ] 00:15:46.590 13:25:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.590 13:25:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:46.590 13:25:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:46.590 13:25:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:15:46.590 13:25:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:46.590 13:25:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:46.590 13:25:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:46.590 13:25:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:46.590 13:25:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:46.590 13:25:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:46.590 13:25:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.590 13:25:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.590 13:25:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.590 13:25:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.590 13:25:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.590 13:25:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:46.590 13:25:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.590 13:25:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.590 13:25:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.850 13:25:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.850 "name": "Existed_Raid", 00:15:46.850 "uuid": 
"786d4d90-f0fd-436c-8ff0-c1a1b2135f25", 00:15:46.850 "strip_size_kb": 64, 00:15:46.850 "state": "configuring", 00:15:46.850 "raid_level": "raid5f", 00:15:46.850 "superblock": true, 00:15:46.850 "num_base_bdevs": 4, 00:15:46.850 "num_base_bdevs_discovered": 2, 00:15:46.850 "num_base_bdevs_operational": 4, 00:15:46.850 "base_bdevs_list": [ 00:15:46.850 { 00:15:46.850 "name": "BaseBdev1", 00:15:46.850 "uuid": "bd7a33c7-2aec-4f15-92fa-e58e498dd8e3", 00:15:46.850 "is_configured": true, 00:15:46.850 "data_offset": 2048, 00:15:46.850 "data_size": 63488 00:15:46.850 }, 00:15:46.850 { 00:15:46.850 "name": "BaseBdev2", 00:15:46.850 "uuid": "88174d72-c119-49af-8fac-f39af4c1a6f4", 00:15:46.850 "is_configured": true, 00:15:46.850 "data_offset": 2048, 00:15:46.850 "data_size": 63488 00:15:46.850 }, 00:15:46.850 { 00:15:46.850 "name": "BaseBdev3", 00:15:46.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.850 "is_configured": false, 00:15:46.850 "data_offset": 0, 00:15:46.850 "data_size": 0 00:15:46.850 }, 00:15:46.850 { 00:15:46.850 "name": "BaseBdev4", 00:15:46.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.850 "is_configured": false, 00:15:46.850 "data_offset": 0, 00:15:46.850 "data_size": 0 00:15:46.850 } 00:15:46.850 ] 00:15:46.850 }' 00:15:46.850 13:25:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.850 13:25:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.110 13:25:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:47.110 13:25:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.110 13:25:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.110 [2024-11-17 13:25:36.321988] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:47.110 BaseBdev3 
00:15:47.110 13:25:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.110 13:25:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:47.110 13:25:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:47.110 13:25:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:47.110 13:25:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:47.110 13:25:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:47.110 13:25:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:47.110 13:25:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:47.110 13:25:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.110 13:25:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.370 13:25:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.370 13:25:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:47.370 13:25:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.370 13:25:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.370 [ 00:15:47.370 { 00:15:47.370 "name": "BaseBdev3", 00:15:47.370 "aliases": [ 00:15:47.370 "4db55bb5-6d7f-4140-9886-3e22b2486b74" 00:15:47.370 ], 00:15:47.370 "product_name": "Malloc disk", 00:15:47.370 "block_size": 512, 00:15:47.370 "num_blocks": 65536, 00:15:47.370 "uuid": "4db55bb5-6d7f-4140-9886-3e22b2486b74", 00:15:47.370 
"assigned_rate_limits": { 00:15:47.370 "rw_ios_per_sec": 0, 00:15:47.370 "rw_mbytes_per_sec": 0, 00:15:47.370 "r_mbytes_per_sec": 0, 00:15:47.370 "w_mbytes_per_sec": 0 00:15:47.370 }, 00:15:47.370 "claimed": true, 00:15:47.370 "claim_type": "exclusive_write", 00:15:47.370 "zoned": false, 00:15:47.370 "supported_io_types": { 00:15:47.370 "read": true, 00:15:47.370 "write": true, 00:15:47.370 "unmap": true, 00:15:47.370 "flush": true, 00:15:47.370 "reset": true, 00:15:47.370 "nvme_admin": false, 00:15:47.370 "nvme_io": false, 00:15:47.370 "nvme_io_md": false, 00:15:47.370 "write_zeroes": true, 00:15:47.370 "zcopy": true, 00:15:47.370 "get_zone_info": false, 00:15:47.370 "zone_management": false, 00:15:47.370 "zone_append": false, 00:15:47.370 "compare": false, 00:15:47.370 "compare_and_write": false, 00:15:47.370 "abort": true, 00:15:47.370 "seek_hole": false, 00:15:47.370 "seek_data": false, 00:15:47.370 "copy": true, 00:15:47.370 "nvme_iov_md": false 00:15:47.370 }, 00:15:47.370 "memory_domains": [ 00:15:47.370 { 00:15:47.370 "dma_device_id": "system", 00:15:47.370 "dma_device_type": 1 00:15:47.370 }, 00:15:47.370 { 00:15:47.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:47.370 "dma_device_type": 2 00:15:47.370 } 00:15:47.370 ], 00:15:47.370 "driver_specific": {} 00:15:47.370 } 00:15:47.370 ] 00:15:47.370 13:25:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.370 13:25:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:47.370 13:25:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:47.370 13:25:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:47.370 13:25:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:47.370 13:25:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:15:47.370 13:25:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:47.370 13:25:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:47.370 13:25:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:47.370 13:25:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:47.370 13:25:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.370 13:25:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.370 13:25:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.370 13:25:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.370 13:25:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.370 13:25:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:47.370 13:25:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.370 13:25:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.370 13:25:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.370 13:25:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.370 "name": "Existed_Raid", 00:15:47.370 "uuid": "786d4d90-f0fd-436c-8ff0-c1a1b2135f25", 00:15:47.370 "strip_size_kb": 64, 00:15:47.370 "state": "configuring", 00:15:47.370 "raid_level": "raid5f", 00:15:47.370 "superblock": true, 00:15:47.370 "num_base_bdevs": 4, 00:15:47.370 "num_base_bdevs_discovered": 3, 
00:15:47.370 "num_base_bdevs_operational": 4, 00:15:47.370 "base_bdevs_list": [ 00:15:47.370 { 00:15:47.370 "name": "BaseBdev1", 00:15:47.370 "uuid": "bd7a33c7-2aec-4f15-92fa-e58e498dd8e3", 00:15:47.370 "is_configured": true, 00:15:47.370 "data_offset": 2048, 00:15:47.370 "data_size": 63488 00:15:47.370 }, 00:15:47.370 { 00:15:47.370 "name": "BaseBdev2", 00:15:47.370 "uuid": "88174d72-c119-49af-8fac-f39af4c1a6f4", 00:15:47.370 "is_configured": true, 00:15:47.370 "data_offset": 2048, 00:15:47.370 "data_size": 63488 00:15:47.370 }, 00:15:47.370 { 00:15:47.370 "name": "BaseBdev3", 00:15:47.370 "uuid": "4db55bb5-6d7f-4140-9886-3e22b2486b74", 00:15:47.370 "is_configured": true, 00:15:47.370 "data_offset": 2048, 00:15:47.370 "data_size": 63488 00:15:47.370 }, 00:15:47.370 { 00:15:47.370 "name": "BaseBdev4", 00:15:47.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.370 "is_configured": false, 00:15:47.370 "data_offset": 0, 00:15:47.370 "data_size": 0 00:15:47.370 } 00:15:47.370 ] 00:15:47.370 }' 00:15:47.370 13:25:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.370 13:25:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.630 13:25:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:47.630 13:25:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.630 13:25:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.630 [2024-11-17 13:25:36.849202] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:47.630 [2024-11-17 13:25:36.849617] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:47.630 [2024-11-17 13:25:36.849684] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:47.630 [2024-11-17 
13:25:36.849982] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:47.630 BaseBdev4 00:15:47.630 13:25:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.630 13:25:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:47.630 13:25:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:47.630 13:25:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:47.630 13:25:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:47.630 13:25:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:47.630 13:25:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:47.630 13:25:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:47.630 13:25:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.630 13:25:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.890 [2024-11-17 13:25:36.857467] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:47.890 [2024-11-17 13:25:36.857533] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:47.890 [2024-11-17 13:25:36.857844] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:47.890 13:25:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.890 13:25:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:47.890 13:25:36 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.890 13:25:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.890 [ 00:15:47.890 { 00:15:47.890 "name": "BaseBdev4", 00:15:47.890 "aliases": [ 00:15:47.890 "b2c1cc14-300b-4c01-9dbb-c0206e54ba56" 00:15:47.890 ], 00:15:47.890 "product_name": "Malloc disk", 00:15:47.890 "block_size": 512, 00:15:47.890 "num_blocks": 65536, 00:15:47.890 "uuid": "b2c1cc14-300b-4c01-9dbb-c0206e54ba56", 00:15:47.890 "assigned_rate_limits": { 00:15:47.890 "rw_ios_per_sec": 0, 00:15:47.890 "rw_mbytes_per_sec": 0, 00:15:47.890 "r_mbytes_per_sec": 0, 00:15:47.890 "w_mbytes_per_sec": 0 00:15:47.890 }, 00:15:47.890 "claimed": true, 00:15:47.890 "claim_type": "exclusive_write", 00:15:47.890 "zoned": false, 00:15:47.890 "supported_io_types": { 00:15:47.890 "read": true, 00:15:47.890 "write": true, 00:15:47.890 "unmap": true, 00:15:47.890 "flush": true, 00:15:47.890 "reset": true, 00:15:47.890 "nvme_admin": false, 00:15:47.890 "nvme_io": false, 00:15:47.890 "nvme_io_md": false, 00:15:47.890 "write_zeroes": true, 00:15:47.890 "zcopy": true, 00:15:47.890 "get_zone_info": false, 00:15:47.890 "zone_management": false, 00:15:47.890 "zone_append": false, 00:15:47.890 "compare": false, 00:15:47.890 "compare_and_write": false, 00:15:47.890 "abort": true, 00:15:47.890 "seek_hole": false, 00:15:47.890 "seek_data": false, 00:15:47.890 "copy": true, 00:15:47.890 "nvme_iov_md": false 00:15:47.890 }, 00:15:47.890 "memory_domains": [ 00:15:47.890 { 00:15:47.890 "dma_device_id": "system", 00:15:47.890 "dma_device_type": 1 00:15:47.890 }, 00:15:47.890 { 00:15:47.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:47.890 "dma_device_type": 2 00:15:47.890 } 00:15:47.890 ], 00:15:47.890 "driver_specific": {} 00:15:47.890 } 00:15:47.890 ] 00:15:47.890 13:25:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.890 13:25:36 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:47.890 13:25:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:47.890 13:25:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:47.890 13:25:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:47.890 13:25:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:47.890 13:25:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:47.890 13:25:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:47.890 13:25:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:47.890 13:25:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:47.890 13:25:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.890 13:25:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.890 13:25:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.890 13:25:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.890 13:25:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.890 13:25:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.890 13:25:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.890 13:25:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:15:47.890 13:25:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.890 13:25:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.890 "name": "Existed_Raid", 00:15:47.890 "uuid": "786d4d90-f0fd-436c-8ff0-c1a1b2135f25", 00:15:47.890 "strip_size_kb": 64, 00:15:47.890 "state": "online", 00:15:47.890 "raid_level": "raid5f", 00:15:47.890 "superblock": true, 00:15:47.890 "num_base_bdevs": 4, 00:15:47.890 "num_base_bdevs_discovered": 4, 00:15:47.890 "num_base_bdevs_operational": 4, 00:15:47.890 "base_bdevs_list": [ 00:15:47.890 { 00:15:47.890 "name": "BaseBdev1", 00:15:47.891 "uuid": "bd7a33c7-2aec-4f15-92fa-e58e498dd8e3", 00:15:47.891 "is_configured": true, 00:15:47.891 "data_offset": 2048, 00:15:47.891 "data_size": 63488 00:15:47.891 }, 00:15:47.891 { 00:15:47.891 "name": "BaseBdev2", 00:15:47.891 "uuid": "88174d72-c119-49af-8fac-f39af4c1a6f4", 00:15:47.891 "is_configured": true, 00:15:47.891 "data_offset": 2048, 00:15:47.891 "data_size": 63488 00:15:47.891 }, 00:15:47.891 { 00:15:47.891 "name": "BaseBdev3", 00:15:47.891 "uuid": "4db55bb5-6d7f-4140-9886-3e22b2486b74", 00:15:47.891 "is_configured": true, 00:15:47.891 "data_offset": 2048, 00:15:47.891 "data_size": 63488 00:15:47.891 }, 00:15:47.891 { 00:15:47.891 "name": "BaseBdev4", 00:15:47.891 "uuid": "b2c1cc14-300b-4c01-9dbb-c0206e54ba56", 00:15:47.891 "is_configured": true, 00:15:47.891 "data_offset": 2048, 00:15:47.891 "data_size": 63488 00:15:47.891 } 00:15:47.891 ] 00:15:47.891 }' 00:15:47.891 13:25:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.891 13:25:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.151 13:25:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:48.151 13:25:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:15:48.151 13:25:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:48.151 13:25:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:48.151 13:25:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:48.151 13:25:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:48.151 13:25:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:48.151 13:25:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:48.151 13:25:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.151 13:25:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.151 [2024-11-17 13:25:37.332915] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:48.151 13:25:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.151 13:25:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:48.151 "name": "Existed_Raid", 00:15:48.151 "aliases": [ 00:15:48.151 "786d4d90-f0fd-436c-8ff0-c1a1b2135f25" 00:15:48.151 ], 00:15:48.151 "product_name": "Raid Volume", 00:15:48.151 "block_size": 512, 00:15:48.151 "num_blocks": 190464, 00:15:48.151 "uuid": "786d4d90-f0fd-436c-8ff0-c1a1b2135f25", 00:15:48.151 "assigned_rate_limits": { 00:15:48.151 "rw_ios_per_sec": 0, 00:15:48.151 "rw_mbytes_per_sec": 0, 00:15:48.151 "r_mbytes_per_sec": 0, 00:15:48.151 "w_mbytes_per_sec": 0 00:15:48.151 }, 00:15:48.151 "claimed": false, 00:15:48.151 "zoned": false, 00:15:48.151 "supported_io_types": { 00:15:48.151 "read": true, 00:15:48.151 "write": true, 00:15:48.151 "unmap": false, 00:15:48.151 "flush": false, 
00:15:48.151 "reset": true, 00:15:48.151 "nvme_admin": false, 00:15:48.151 "nvme_io": false, 00:15:48.151 "nvme_io_md": false, 00:15:48.151 "write_zeroes": true, 00:15:48.151 "zcopy": false, 00:15:48.151 "get_zone_info": false, 00:15:48.151 "zone_management": false, 00:15:48.151 "zone_append": false, 00:15:48.151 "compare": false, 00:15:48.151 "compare_and_write": false, 00:15:48.151 "abort": false, 00:15:48.151 "seek_hole": false, 00:15:48.151 "seek_data": false, 00:15:48.151 "copy": false, 00:15:48.151 "nvme_iov_md": false 00:15:48.151 }, 00:15:48.151 "driver_specific": { 00:15:48.151 "raid": { 00:15:48.151 "uuid": "786d4d90-f0fd-436c-8ff0-c1a1b2135f25", 00:15:48.151 "strip_size_kb": 64, 00:15:48.151 "state": "online", 00:15:48.151 "raid_level": "raid5f", 00:15:48.151 "superblock": true, 00:15:48.151 "num_base_bdevs": 4, 00:15:48.151 "num_base_bdevs_discovered": 4, 00:15:48.151 "num_base_bdevs_operational": 4, 00:15:48.151 "base_bdevs_list": [ 00:15:48.151 { 00:15:48.151 "name": "BaseBdev1", 00:15:48.151 "uuid": "bd7a33c7-2aec-4f15-92fa-e58e498dd8e3", 00:15:48.151 "is_configured": true, 00:15:48.151 "data_offset": 2048, 00:15:48.151 "data_size": 63488 00:15:48.151 }, 00:15:48.151 { 00:15:48.151 "name": "BaseBdev2", 00:15:48.151 "uuid": "88174d72-c119-49af-8fac-f39af4c1a6f4", 00:15:48.151 "is_configured": true, 00:15:48.151 "data_offset": 2048, 00:15:48.151 "data_size": 63488 00:15:48.151 }, 00:15:48.151 { 00:15:48.151 "name": "BaseBdev3", 00:15:48.151 "uuid": "4db55bb5-6d7f-4140-9886-3e22b2486b74", 00:15:48.151 "is_configured": true, 00:15:48.151 "data_offset": 2048, 00:15:48.151 "data_size": 63488 00:15:48.151 }, 00:15:48.151 { 00:15:48.151 "name": "BaseBdev4", 00:15:48.151 "uuid": "b2c1cc14-300b-4c01-9dbb-c0206e54ba56", 00:15:48.151 "is_configured": true, 00:15:48.151 "data_offset": 2048, 00:15:48.151 "data_size": 63488 00:15:48.151 } 00:15:48.151 ] 00:15:48.151 } 00:15:48.151 } 00:15:48.151 }' 00:15:48.151 13:25:37 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:48.411 13:25:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:48.411 BaseBdev2 00:15:48.411 BaseBdev3 00:15:48.411 BaseBdev4' 00:15:48.411 13:25:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:48.411 13:25:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:48.411 13:25:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:48.411 13:25:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:48.411 13:25:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:48.411 13:25:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.411 13:25:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.411 13:25:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.411 13:25:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:48.411 13:25:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:48.411 13:25:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:48.411 13:25:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:48.411 13:25:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.411 13:25:37 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:48.411 13:25:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:48.411 13:25:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.411 13:25:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:48.411 13:25:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:48.411 13:25:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:48.411 13:25:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:48.411 13:25:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:48.411 13:25:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.411 13:25:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.411 13:25:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.411 13:25:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:48.411 13:25:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:48.411 13:25:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:48.411 13:25:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:48.411 13:25:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:48.411 13:25:37 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.411 13:25:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.411 13:25:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.411 13:25:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:48.411 13:25:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:48.411 13:25:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:48.411 13:25:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.411 13:25:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.411 [2024-11-17 13:25:37.624280] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:48.675 13:25:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.675 13:25:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:48.675 13:25:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:48.675 13:25:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:48.675 13:25:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:15:48.675 13:25:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:48.675 13:25:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:48.675 13:25:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:48.675 13:25:37 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:48.675 13:25:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:48.675 13:25:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:48.675 13:25:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:48.675 13:25:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.675 13:25:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.675 13:25:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.675 13:25:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.675 13:25:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.675 13:25:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.675 13:25:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.675 13:25:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:48.675 13:25:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.675 13:25:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.675 "name": "Existed_Raid", 00:15:48.675 "uuid": "786d4d90-f0fd-436c-8ff0-c1a1b2135f25", 00:15:48.675 "strip_size_kb": 64, 00:15:48.675 "state": "online", 00:15:48.675 "raid_level": "raid5f", 00:15:48.675 "superblock": true, 00:15:48.675 "num_base_bdevs": 4, 00:15:48.675 "num_base_bdevs_discovered": 3, 00:15:48.675 "num_base_bdevs_operational": 3, 00:15:48.675 "base_bdevs_list": [ 00:15:48.675 { 00:15:48.675 "name": 
null, 00:15:48.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.675 "is_configured": false, 00:15:48.675 "data_offset": 0, 00:15:48.675 "data_size": 63488 00:15:48.675 }, 00:15:48.675 { 00:15:48.675 "name": "BaseBdev2", 00:15:48.675 "uuid": "88174d72-c119-49af-8fac-f39af4c1a6f4", 00:15:48.675 "is_configured": true, 00:15:48.675 "data_offset": 2048, 00:15:48.675 "data_size": 63488 00:15:48.675 }, 00:15:48.675 { 00:15:48.675 "name": "BaseBdev3", 00:15:48.675 "uuid": "4db55bb5-6d7f-4140-9886-3e22b2486b74", 00:15:48.675 "is_configured": true, 00:15:48.675 "data_offset": 2048, 00:15:48.675 "data_size": 63488 00:15:48.675 }, 00:15:48.675 { 00:15:48.675 "name": "BaseBdev4", 00:15:48.675 "uuid": "b2c1cc14-300b-4c01-9dbb-c0206e54ba56", 00:15:48.675 "is_configured": true, 00:15:48.675 "data_offset": 2048, 00:15:48.675 "data_size": 63488 00:15:48.675 } 00:15:48.675 ] 00:15:48.675 }' 00:15:48.676 13:25:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.676 13:25:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.936 13:25:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:48.936 13:25:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:48.936 13:25:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:48.936 13:25:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.936 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.936 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.936 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.195 13:25:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:15:49.195 13:25:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:49.195 13:25:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:49.195 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.195 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.195 [2024-11-17 13:25:38.170860] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:49.195 [2024-11-17 13:25:38.171081] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:49.195 [2024-11-17 13:25:38.261158] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:49.195 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.195 13:25:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:49.195 13:25:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:49.195 13:25:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.195 13:25:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:49.195 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.195 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.195 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.195 13:25:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:49.195 13:25:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:15:49.195 13:25:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:49.195 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.195 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.195 [2024-11-17 13:25:38.317087] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:49.195 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.195 13:25:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:49.195 13:25:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:49.195 13:25:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.195 13:25:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:49.195 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.195 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.455 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.455 13:25:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:49.455 13:25:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:49.455 13:25:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:49.455 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.455 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.455 [2024-11-17 
13:25:38.468425] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:49.455 [2024-11-17 13:25:38.468520] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:49.455 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.455 13:25:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:49.455 13:25:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:49.455 13:25:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.455 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.455 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.455 13:25:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:49.455 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.455 13:25:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:49.455 13:25:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:49.455 13:25:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:49.455 13:25:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:49.455 13:25:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:49.455 13:25:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:49.455 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.455 13:25:38 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.455 BaseBdev2 00:15:49.455 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.455 13:25:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:49.455 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:49.455 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:49.455 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:49.455 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:49.455 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:49.455 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:49.455 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.455 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.455 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.455 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:49.455 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.455 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.716 [ 00:15:49.716 { 00:15:49.716 "name": "BaseBdev2", 00:15:49.716 "aliases": [ 00:15:49.716 "993dafa5-6cd9-4436-ac9b-daa896739905" 00:15:49.716 ], 00:15:49.716 "product_name": "Malloc disk", 00:15:49.716 "block_size": 512, 00:15:49.716 
"num_blocks": 65536, 00:15:49.716 "uuid": "993dafa5-6cd9-4436-ac9b-daa896739905", 00:15:49.716 "assigned_rate_limits": { 00:15:49.716 "rw_ios_per_sec": 0, 00:15:49.716 "rw_mbytes_per_sec": 0, 00:15:49.716 "r_mbytes_per_sec": 0, 00:15:49.716 "w_mbytes_per_sec": 0 00:15:49.716 }, 00:15:49.716 "claimed": false, 00:15:49.716 "zoned": false, 00:15:49.716 "supported_io_types": { 00:15:49.716 "read": true, 00:15:49.716 "write": true, 00:15:49.716 "unmap": true, 00:15:49.716 "flush": true, 00:15:49.716 "reset": true, 00:15:49.716 "nvme_admin": false, 00:15:49.716 "nvme_io": false, 00:15:49.716 "nvme_io_md": false, 00:15:49.716 "write_zeroes": true, 00:15:49.716 "zcopy": true, 00:15:49.716 "get_zone_info": false, 00:15:49.716 "zone_management": false, 00:15:49.716 "zone_append": false, 00:15:49.716 "compare": false, 00:15:49.716 "compare_and_write": false, 00:15:49.716 "abort": true, 00:15:49.716 "seek_hole": false, 00:15:49.716 "seek_data": false, 00:15:49.716 "copy": true, 00:15:49.716 "nvme_iov_md": false 00:15:49.716 }, 00:15:49.716 "memory_domains": [ 00:15:49.716 { 00:15:49.716 "dma_device_id": "system", 00:15:49.716 "dma_device_type": 1 00:15:49.716 }, 00:15:49.716 { 00:15:49.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.716 "dma_device_type": 2 00:15:49.716 } 00:15:49.716 ], 00:15:49.716 "driver_specific": {} 00:15:49.716 } 00:15:49.716 ] 00:15:49.716 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.716 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:49.716 13:25:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:49.716 13:25:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:49.716 13:25:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:49.716 13:25:38 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.716 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.716 BaseBdev3 00:15:49.716 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.716 13:25:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:49.716 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:49.716 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:49.716 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:49.716 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:49.716 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:49.716 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:49.716 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.716 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.716 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.716 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:49.716 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.716 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.716 [ 00:15:49.716 { 00:15:49.716 "name": "BaseBdev3", 00:15:49.716 "aliases": [ 00:15:49.716 
"5223f90b-a20a-435d-a770-d3dc8cb8803d" 00:15:49.716 ], 00:15:49.716 "product_name": "Malloc disk", 00:15:49.716 "block_size": 512, 00:15:49.716 "num_blocks": 65536, 00:15:49.716 "uuid": "5223f90b-a20a-435d-a770-d3dc8cb8803d", 00:15:49.716 "assigned_rate_limits": { 00:15:49.716 "rw_ios_per_sec": 0, 00:15:49.716 "rw_mbytes_per_sec": 0, 00:15:49.716 "r_mbytes_per_sec": 0, 00:15:49.716 "w_mbytes_per_sec": 0 00:15:49.716 }, 00:15:49.716 "claimed": false, 00:15:49.716 "zoned": false, 00:15:49.716 "supported_io_types": { 00:15:49.716 "read": true, 00:15:49.716 "write": true, 00:15:49.716 "unmap": true, 00:15:49.716 "flush": true, 00:15:49.716 "reset": true, 00:15:49.716 "nvme_admin": false, 00:15:49.716 "nvme_io": false, 00:15:49.716 "nvme_io_md": false, 00:15:49.716 "write_zeroes": true, 00:15:49.716 "zcopy": true, 00:15:49.716 "get_zone_info": false, 00:15:49.716 "zone_management": false, 00:15:49.716 "zone_append": false, 00:15:49.716 "compare": false, 00:15:49.716 "compare_and_write": false, 00:15:49.716 "abort": true, 00:15:49.716 "seek_hole": false, 00:15:49.716 "seek_data": false, 00:15:49.716 "copy": true, 00:15:49.716 "nvme_iov_md": false 00:15:49.716 }, 00:15:49.716 "memory_domains": [ 00:15:49.716 { 00:15:49.716 "dma_device_id": "system", 00:15:49.716 "dma_device_type": 1 00:15:49.716 }, 00:15:49.716 { 00:15:49.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.716 "dma_device_type": 2 00:15:49.716 } 00:15:49.716 ], 00:15:49.716 "driver_specific": {} 00:15:49.716 } 00:15:49.716 ] 00:15:49.716 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.716 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:49.716 13:25:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:49.716 13:25:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:49.716 13:25:38 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:49.716 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.716 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.716 BaseBdev4 00:15:49.716 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.716 13:25:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:49.716 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:49.716 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:49.716 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:49.716 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:49.716 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:49.716 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:49.716 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.716 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.716 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.716 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:49.717 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.717 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:15:49.717 [ 00:15:49.717 { 00:15:49.717 "name": "BaseBdev4", 00:15:49.717 "aliases": [ 00:15:49.717 "7db914f6-291b-4409-84f8-3d09846bdec7" 00:15:49.717 ], 00:15:49.717 "product_name": "Malloc disk", 00:15:49.717 "block_size": 512, 00:15:49.717 "num_blocks": 65536, 00:15:49.717 "uuid": "7db914f6-291b-4409-84f8-3d09846bdec7", 00:15:49.717 "assigned_rate_limits": { 00:15:49.717 "rw_ios_per_sec": 0, 00:15:49.717 "rw_mbytes_per_sec": 0, 00:15:49.717 "r_mbytes_per_sec": 0, 00:15:49.717 "w_mbytes_per_sec": 0 00:15:49.717 }, 00:15:49.717 "claimed": false, 00:15:49.717 "zoned": false, 00:15:49.717 "supported_io_types": { 00:15:49.717 "read": true, 00:15:49.717 "write": true, 00:15:49.717 "unmap": true, 00:15:49.717 "flush": true, 00:15:49.717 "reset": true, 00:15:49.717 "nvme_admin": false, 00:15:49.717 "nvme_io": false, 00:15:49.717 "nvme_io_md": false, 00:15:49.717 "write_zeroes": true, 00:15:49.717 "zcopy": true, 00:15:49.717 "get_zone_info": false, 00:15:49.717 "zone_management": false, 00:15:49.717 "zone_append": false, 00:15:49.717 "compare": false, 00:15:49.717 "compare_and_write": false, 00:15:49.717 "abort": true, 00:15:49.717 "seek_hole": false, 00:15:49.717 "seek_data": false, 00:15:49.717 "copy": true, 00:15:49.717 "nvme_iov_md": false 00:15:49.717 }, 00:15:49.717 "memory_domains": [ 00:15:49.717 { 00:15:49.717 "dma_device_id": "system", 00:15:49.717 "dma_device_type": 1 00:15:49.717 }, 00:15:49.717 { 00:15:49.717 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.717 "dma_device_type": 2 00:15:49.717 } 00:15:49.717 ], 00:15:49.717 "driver_specific": {} 00:15:49.717 } 00:15:49.717 ] 00:15:49.717 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.717 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:49.717 13:25:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:49.717 13:25:38 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:49.717 13:25:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:49.717 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.717 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.717 [2024-11-17 13:25:38.858352] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:49.717 [2024-11-17 13:25:38.858458] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:49.717 [2024-11-17 13:25:38.858499] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:49.717 [2024-11-17 13:25:38.860284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:49.717 [2024-11-17 13:25:38.860373] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:49.717 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.717 13:25:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:49.717 13:25:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:49.717 13:25:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:49.717 13:25:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:49.717 13:25:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:49.717 13:25:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:15:49.717 13:25:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.717 13:25:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.717 13:25:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.717 13:25:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.717 13:25:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.717 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.717 13:25:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:49.717 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.717 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.717 13:25:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.717 "name": "Existed_Raid", 00:15:49.717 "uuid": "034715d4-1439-49e3-a1dc-7dbdfa6cf4b4", 00:15:49.717 "strip_size_kb": 64, 00:15:49.717 "state": "configuring", 00:15:49.717 "raid_level": "raid5f", 00:15:49.717 "superblock": true, 00:15:49.717 "num_base_bdevs": 4, 00:15:49.717 "num_base_bdevs_discovered": 3, 00:15:49.717 "num_base_bdevs_operational": 4, 00:15:49.717 "base_bdevs_list": [ 00:15:49.717 { 00:15:49.717 "name": "BaseBdev1", 00:15:49.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.717 "is_configured": false, 00:15:49.717 "data_offset": 0, 00:15:49.717 "data_size": 0 00:15:49.717 }, 00:15:49.717 { 00:15:49.717 "name": "BaseBdev2", 00:15:49.717 "uuid": "993dafa5-6cd9-4436-ac9b-daa896739905", 00:15:49.717 "is_configured": true, 00:15:49.717 "data_offset": 2048, 00:15:49.717 
"data_size": 63488 00:15:49.717 }, 00:15:49.717 { 00:15:49.717 "name": "BaseBdev3", 00:15:49.717 "uuid": "5223f90b-a20a-435d-a770-d3dc8cb8803d", 00:15:49.717 "is_configured": true, 00:15:49.717 "data_offset": 2048, 00:15:49.717 "data_size": 63488 00:15:49.717 }, 00:15:49.717 { 00:15:49.717 "name": "BaseBdev4", 00:15:49.717 "uuid": "7db914f6-291b-4409-84f8-3d09846bdec7", 00:15:49.717 "is_configured": true, 00:15:49.717 "data_offset": 2048, 00:15:49.717 "data_size": 63488 00:15:49.717 } 00:15:49.717 ] 00:15:49.717 }' 00:15:49.717 13:25:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.717 13:25:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.287 13:25:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:50.287 13:25:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.287 13:25:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.287 [2024-11-17 13:25:39.293599] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:50.287 13:25:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.287 13:25:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:50.287 13:25:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:50.287 13:25:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:50.287 13:25:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:50.287 13:25:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:50.287 13:25:39 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:50.287 13:25:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.287 13:25:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.287 13:25:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.287 13:25:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.287 13:25:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.287 13:25:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.287 13:25:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.287 13:25:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.287 13:25:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.287 13:25:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.287 "name": "Existed_Raid", 00:15:50.287 "uuid": "034715d4-1439-49e3-a1dc-7dbdfa6cf4b4", 00:15:50.287 "strip_size_kb": 64, 00:15:50.287 "state": "configuring", 00:15:50.287 "raid_level": "raid5f", 00:15:50.287 "superblock": true, 00:15:50.287 "num_base_bdevs": 4, 00:15:50.287 "num_base_bdevs_discovered": 2, 00:15:50.287 "num_base_bdevs_operational": 4, 00:15:50.287 "base_bdevs_list": [ 00:15:50.287 { 00:15:50.287 "name": "BaseBdev1", 00:15:50.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.287 "is_configured": false, 00:15:50.287 "data_offset": 0, 00:15:50.287 "data_size": 0 00:15:50.287 }, 00:15:50.287 { 00:15:50.287 "name": null, 00:15:50.287 "uuid": "993dafa5-6cd9-4436-ac9b-daa896739905", 00:15:50.287 
"is_configured": false, 00:15:50.287 "data_offset": 0, 00:15:50.287 "data_size": 63488 00:15:50.287 }, 00:15:50.287 { 00:15:50.287 "name": "BaseBdev3", 00:15:50.287 "uuid": "5223f90b-a20a-435d-a770-d3dc8cb8803d", 00:15:50.287 "is_configured": true, 00:15:50.287 "data_offset": 2048, 00:15:50.287 "data_size": 63488 00:15:50.287 }, 00:15:50.287 { 00:15:50.287 "name": "BaseBdev4", 00:15:50.287 "uuid": "7db914f6-291b-4409-84f8-3d09846bdec7", 00:15:50.287 "is_configured": true, 00:15:50.287 "data_offset": 2048, 00:15:50.287 "data_size": 63488 00:15:50.287 } 00:15:50.287 ] 00:15:50.287 }' 00:15:50.287 13:25:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.287 13:25:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.547 13:25:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.547 13:25:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.547 13:25:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.547 13:25:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:50.547 13:25:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.547 13:25:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:50.547 13:25:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:50.547 13:25:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.547 13:25:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.547 [2024-11-17 13:25:39.725146] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:15:50.547 BaseBdev1 00:15:50.547 13:25:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.547 13:25:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:50.547 13:25:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:50.548 13:25:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:50.548 13:25:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:50.548 13:25:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:50.548 13:25:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:50.548 13:25:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:50.548 13:25:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.548 13:25:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.548 13:25:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.548 13:25:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:50.548 13:25:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.548 13:25:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.548 [ 00:15:50.548 { 00:15:50.548 "name": "BaseBdev1", 00:15:50.548 "aliases": [ 00:15:50.548 "32cf6315-abc6-4a01-ae51-958b8a2c3701" 00:15:50.548 ], 00:15:50.548 "product_name": "Malloc disk", 00:15:50.548 "block_size": 512, 00:15:50.548 "num_blocks": 65536, 00:15:50.548 "uuid": "32cf6315-abc6-4a01-ae51-958b8a2c3701", 
00:15:50.548 "assigned_rate_limits": { 00:15:50.548 "rw_ios_per_sec": 0, 00:15:50.548 "rw_mbytes_per_sec": 0, 00:15:50.548 "r_mbytes_per_sec": 0, 00:15:50.548 "w_mbytes_per_sec": 0 00:15:50.548 }, 00:15:50.548 "claimed": true, 00:15:50.548 "claim_type": "exclusive_write", 00:15:50.548 "zoned": false, 00:15:50.548 "supported_io_types": { 00:15:50.548 "read": true, 00:15:50.548 "write": true, 00:15:50.548 "unmap": true, 00:15:50.548 "flush": true, 00:15:50.548 "reset": true, 00:15:50.548 "nvme_admin": false, 00:15:50.548 "nvme_io": false, 00:15:50.548 "nvme_io_md": false, 00:15:50.548 "write_zeroes": true, 00:15:50.548 "zcopy": true, 00:15:50.548 "get_zone_info": false, 00:15:50.548 "zone_management": false, 00:15:50.548 "zone_append": false, 00:15:50.548 "compare": false, 00:15:50.548 "compare_and_write": false, 00:15:50.548 "abort": true, 00:15:50.548 "seek_hole": false, 00:15:50.548 "seek_data": false, 00:15:50.548 "copy": true, 00:15:50.548 "nvme_iov_md": false 00:15:50.548 }, 00:15:50.548 "memory_domains": [ 00:15:50.548 { 00:15:50.548 "dma_device_id": "system", 00:15:50.548 "dma_device_type": 1 00:15:50.548 }, 00:15:50.548 { 00:15:50.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.548 "dma_device_type": 2 00:15:50.548 } 00:15:50.548 ], 00:15:50.548 "driver_specific": {} 00:15:50.548 } 00:15:50.548 ] 00:15:50.548 13:25:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.548 13:25:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:50.548 13:25:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:50.548 13:25:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:50.548 13:25:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:50.548 13:25:39 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:50.548 13:25:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:50.548 13:25:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:50.548 13:25:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.548 13:25:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.548 13:25:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.548 13:25:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.808 13:25:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.808 13:25:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.808 13:25:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.808 13:25:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.808 13:25:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.808 13:25:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.808 "name": "Existed_Raid", 00:15:50.808 "uuid": "034715d4-1439-49e3-a1dc-7dbdfa6cf4b4", 00:15:50.808 "strip_size_kb": 64, 00:15:50.808 "state": "configuring", 00:15:50.808 "raid_level": "raid5f", 00:15:50.808 "superblock": true, 00:15:50.808 "num_base_bdevs": 4, 00:15:50.808 "num_base_bdevs_discovered": 3, 00:15:50.808 "num_base_bdevs_operational": 4, 00:15:50.808 "base_bdevs_list": [ 00:15:50.808 { 00:15:50.808 "name": "BaseBdev1", 00:15:50.808 "uuid": "32cf6315-abc6-4a01-ae51-958b8a2c3701", 
00:15:50.808 "is_configured": true, 00:15:50.808 "data_offset": 2048, 00:15:50.808 "data_size": 63488 00:15:50.808 }, 00:15:50.808 { 00:15:50.808 "name": null, 00:15:50.808 "uuid": "993dafa5-6cd9-4436-ac9b-daa896739905", 00:15:50.808 "is_configured": false, 00:15:50.808 "data_offset": 0, 00:15:50.808 "data_size": 63488 00:15:50.808 }, 00:15:50.808 { 00:15:50.808 "name": "BaseBdev3", 00:15:50.808 "uuid": "5223f90b-a20a-435d-a770-d3dc8cb8803d", 00:15:50.808 "is_configured": true, 00:15:50.808 "data_offset": 2048, 00:15:50.808 "data_size": 63488 00:15:50.808 }, 00:15:50.808 { 00:15:50.808 "name": "BaseBdev4", 00:15:50.808 "uuid": "7db914f6-291b-4409-84f8-3d09846bdec7", 00:15:50.808 "is_configured": true, 00:15:50.808 "data_offset": 2048, 00:15:50.808 "data_size": 63488 00:15:50.808 } 00:15:50.808 ] 00:15:50.808 }' 00:15:50.808 13:25:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.808 13:25:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.068 13:25:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.068 13:25:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:51.068 13:25:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.068 13:25:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.068 13:25:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.068 13:25:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:51.068 13:25:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:51.068 13:25:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:51.068 13:25:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.068 [2024-11-17 13:25:40.196455] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:51.068 13:25:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.068 13:25:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:51.068 13:25:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:51.068 13:25:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:51.068 13:25:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:51.068 13:25:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:51.068 13:25:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:51.068 13:25:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.068 13:25:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.068 13:25:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.068 13:25:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.068 13:25:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:51.068 13:25:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.068 13:25:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.068 13:25:40 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:51.068 13:25:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.068 13:25:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.068 "name": "Existed_Raid", 00:15:51.068 "uuid": "034715d4-1439-49e3-a1dc-7dbdfa6cf4b4", 00:15:51.068 "strip_size_kb": 64, 00:15:51.068 "state": "configuring", 00:15:51.068 "raid_level": "raid5f", 00:15:51.068 "superblock": true, 00:15:51.068 "num_base_bdevs": 4, 00:15:51.068 "num_base_bdevs_discovered": 2, 00:15:51.068 "num_base_bdevs_operational": 4, 00:15:51.068 "base_bdevs_list": [ 00:15:51.068 { 00:15:51.068 "name": "BaseBdev1", 00:15:51.068 "uuid": "32cf6315-abc6-4a01-ae51-958b8a2c3701", 00:15:51.068 "is_configured": true, 00:15:51.068 "data_offset": 2048, 00:15:51.068 "data_size": 63488 00:15:51.068 }, 00:15:51.068 { 00:15:51.068 "name": null, 00:15:51.068 "uuid": "993dafa5-6cd9-4436-ac9b-daa896739905", 00:15:51.068 "is_configured": false, 00:15:51.068 "data_offset": 0, 00:15:51.068 "data_size": 63488 00:15:51.068 }, 00:15:51.068 { 00:15:51.068 "name": null, 00:15:51.068 "uuid": "5223f90b-a20a-435d-a770-d3dc8cb8803d", 00:15:51.068 "is_configured": false, 00:15:51.068 "data_offset": 0, 00:15:51.068 "data_size": 63488 00:15:51.068 }, 00:15:51.068 { 00:15:51.068 "name": "BaseBdev4", 00:15:51.068 "uuid": "7db914f6-291b-4409-84f8-3d09846bdec7", 00:15:51.068 "is_configured": true, 00:15:51.068 "data_offset": 2048, 00:15:51.068 "data_size": 63488 00:15:51.068 } 00:15:51.068 ] 00:15:51.068 }' 00:15:51.068 13:25:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.068 13:25:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.639 13:25:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:51.639 13:25:40 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.639 13:25:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.639 13:25:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.639 13:25:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.639 13:25:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:51.639 13:25:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:51.639 13:25:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.639 13:25:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.639 [2024-11-17 13:25:40.599802] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:51.639 13:25:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.639 13:25:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:51.639 13:25:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:51.639 13:25:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:51.639 13:25:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:51.639 13:25:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:51.639 13:25:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:51.639 13:25:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:15:51.639 13:25:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.639 13:25:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.639 13:25:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.639 13:25:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.639 13:25:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.639 13:25:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.639 13:25:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:51.639 13:25:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.639 13:25:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.639 "name": "Existed_Raid", 00:15:51.639 "uuid": "034715d4-1439-49e3-a1dc-7dbdfa6cf4b4", 00:15:51.639 "strip_size_kb": 64, 00:15:51.639 "state": "configuring", 00:15:51.639 "raid_level": "raid5f", 00:15:51.639 "superblock": true, 00:15:51.639 "num_base_bdevs": 4, 00:15:51.639 "num_base_bdevs_discovered": 3, 00:15:51.639 "num_base_bdevs_operational": 4, 00:15:51.639 "base_bdevs_list": [ 00:15:51.639 { 00:15:51.639 "name": "BaseBdev1", 00:15:51.639 "uuid": "32cf6315-abc6-4a01-ae51-958b8a2c3701", 00:15:51.639 "is_configured": true, 00:15:51.639 "data_offset": 2048, 00:15:51.639 "data_size": 63488 00:15:51.639 }, 00:15:51.639 { 00:15:51.639 "name": null, 00:15:51.639 "uuid": "993dafa5-6cd9-4436-ac9b-daa896739905", 00:15:51.639 "is_configured": false, 00:15:51.639 "data_offset": 0, 00:15:51.639 "data_size": 63488 00:15:51.639 }, 00:15:51.639 { 00:15:51.639 "name": "BaseBdev3", 00:15:51.639 "uuid": "5223f90b-a20a-435d-a770-d3dc8cb8803d", 
00:15:51.639 "is_configured": true, 00:15:51.639 "data_offset": 2048, 00:15:51.639 "data_size": 63488 00:15:51.639 }, 00:15:51.639 { 00:15:51.639 "name": "BaseBdev4", 00:15:51.639 "uuid": "7db914f6-291b-4409-84f8-3d09846bdec7", 00:15:51.639 "is_configured": true, 00:15:51.639 "data_offset": 2048, 00:15:51.639 "data_size": 63488 00:15:51.639 } 00:15:51.639 ] 00:15:51.639 }' 00:15:51.639 13:25:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.639 13:25:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.906 13:25:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.906 13:25:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:51.906 13:25:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.906 13:25:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.906 13:25:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.906 13:25:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:51.906 13:25:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:51.906 13:25:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.906 13:25:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.906 [2024-11-17 13:25:41.094982] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:52.166 13:25:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.166 13:25:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:15:52.166 13:25:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:52.166 13:25:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:52.166 13:25:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:52.166 13:25:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:52.166 13:25:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:52.166 13:25:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.166 13:25:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.166 13:25:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.166 13:25:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.166 13:25:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.166 13:25:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:52.166 13:25:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.166 13:25:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.166 13:25:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.166 13:25:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.166 "name": "Existed_Raid", 00:15:52.166 "uuid": "034715d4-1439-49e3-a1dc-7dbdfa6cf4b4", 00:15:52.166 "strip_size_kb": 64, 00:15:52.166 "state": "configuring", 00:15:52.166 "raid_level": "raid5f", 
00:15:52.166 "superblock": true, 00:15:52.166 "num_base_bdevs": 4, 00:15:52.166 "num_base_bdevs_discovered": 2, 00:15:52.166 "num_base_bdevs_operational": 4, 00:15:52.166 "base_bdevs_list": [ 00:15:52.166 { 00:15:52.166 "name": null, 00:15:52.166 "uuid": "32cf6315-abc6-4a01-ae51-958b8a2c3701", 00:15:52.166 "is_configured": false, 00:15:52.166 "data_offset": 0, 00:15:52.166 "data_size": 63488 00:15:52.166 }, 00:15:52.166 { 00:15:52.166 "name": null, 00:15:52.167 "uuid": "993dafa5-6cd9-4436-ac9b-daa896739905", 00:15:52.167 "is_configured": false, 00:15:52.167 "data_offset": 0, 00:15:52.167 "data_size": 63488 00:15:52.167 }, 00:15:52.167 { 00:15:52.167 "name": "BaseBdev3", 00:15:52.167 "uuid": "5223f90b-a20a-435d-a770-d3dc8cb8803d", 00:15:52.167 "is_configured": true, 00:15:52.167 "data_offset": 2048, 00:15:52.167 "data_size": 63488 00:15:52.167 }, 00:15:52.167 { 00:15:52.167 "name": "BaseBdev4", 00:15:52.167 "uuid": "7db914f6-291b-4409-84f8-3d09846bdec7", 00:15:52.167 "is_configured": true, 00:15:52.167 "data_offset": 2048, 00:15:52.167 "data_size": 63488 00:15:52.167 } 00:15:52.167 ] 00:15:52.167 }' 00:15:52.167 13:25:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.167 13:25:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.736 13:25:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.736 13:25:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.736 13:25:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.736 13:25:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:52.736 13:25:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.736 13:25:41 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:52.736 13:25:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:52.736 13:25:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.736 13:25:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.736 [2024-11-17 13:25:41.706310] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:52.736 13:25:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.736 13:25:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:52.736 13:25:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:52.736 13:25:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:52.736 13:25:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:52.736 13:25:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:52.736 13:25:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:52.736 13:25:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.736 13:25:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.736 13:25:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.736 13:25:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.736 13:25:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:15:52.736 13:25:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:52.736 13:25:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.736 13:25:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.736 13:25:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.736 13:25:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.736 "name": "Existed_Raid", 00:15:52.736 "uuid": "034715d4-1439-49e3-a1dc-7dbdfa6cf4b4", 00:15:52.736 "strip_size_kb": 64, 00:15:52.736 "state": "configuring", 00:15:52.736 "raid_level": "raid5f", 00:15:52.736 "superblock": true, 00:15:52.736 "num_base_bdevs": 4, 00:15:52.736 "num_base_bdevs_discovered": 3, 00:15:52.736 "num_base_bdevs_operational": 4, 00:15:52.736 "base_bdevs_list": [ 00:15:52.736 { 00:15:52.736 "name": null, 00:15:52.736 "uuid": "32cf6315-abc6-4a01-ae51-958b8a2c3701", 00:15:52.736 "is_configured": false, 00:15:52.736 "data_offset": 0, 00:15:52.736 "data_size": 63488 00:15:52.736 }, 00:15:52.736 { 00:15:52.736 "name": "BaseBdev2", 00:15:52.736 "uuid": "993dafa5-6cd9-4436-ac9b-daa896739905", 00:15:52.736 "is_configured": true, 00:15:52.736 "data_offset": 2048, 00:15:52.736 "data_size": 63488 00:15:52.736 }, 00:15:52.736 { 00:15:52.736 "name": "BaseBdev3", 00:15:52.736 "uuid": "5223f90b-a20a-435d-a770-d3dc8cb8803d", 00:15:52.736 "is_configured": true, 00:15:52.736 "data_offset": 2048, 00:15:52.736 "data_size": 63488 00:15:52.736 }, 00:15:52.736 { 00:15:52.736 "name": "BaseBdev4", 00:15:52.736 "uuid": "7db914f6-291b-4409-84f8-3d09846bdec7", 00:15:52.736 "is_configured": true, 00:15:52.737 "data_offset": 2048, 00:15:52.737 "data_size": 63488 00:15:52.737 } 00:15:52.737 ] 00:15:52.737 }' 00:15:52.737 13:25:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 
-- # xtrace_disable 00:15:52.737 13:25:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.996 13:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.996 13:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:52.996 13:25:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.996 13:25:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.996 13:25:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.257 13:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:53.257 13:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.257 13:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:53.257 13:25:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.257 13:25:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.257 13:25:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.257 13:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 32cf6315-abc6-4a01-ae51-958b8a2c3701 00:15:53.257 13:25:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.257 13:25:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.257 [2024-11-17 13:25:42.331928] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:53.257 [2024-11-17 13:25:42.332234] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:53.257 [2024-11-17 13:25:42.332282] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:53.257 [2024-11-17 13:25:42.332558] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:53.257 NewBaseBdev 00:15:53.257 13:25:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.257 13:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:53.257 13:25:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:53.257 13:25:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:53.257 13:25:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:53.257 13:25:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:53.257 13:25:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:53.257 13:25:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:53.257 13:25:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.257 13:25:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.257 [2024-11-17 13:25:42.339310] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:53.257 [2024-11-17 13:25:42.339337] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:53.257 [2024-11-17 13:25:42.339481] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:53.257 13:25:42 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.257 13:25:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:53.257 13:25:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.257 13:25:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.257 [ 00:15:53.257 { 00:15:53.257 "name": "NewBaseBdev", 00:15:53.257 "aliases": [ 00:15:53.257 "32cf6315-abc6-4a01-ae51-958b8a2c3701" 00:15:53.257 ], 00:15:53.257 "product_name": "Malloc disk", 00:15:53.257 "block_size": 512, 00:15:53.257 "num_blocks": 65536, 00:15:53.257 "uuid": "32cf6315-abc6-4a01-ae51-958b8a2c3701", 00:15:53.257 "assigned_rate_limits": { 00:15:53.257 "rw_ios_per_sec": 0, 00:15:53.257 "rw_mbytes_per_sec": 0, 00:15:53.257 "r_mbytes_per_sec": 0, 00:15:53.257 "w_mbytes_per_sec": 0 00:15:53.257 }, 00:15:53.257 "claimed": true, 00:15:53.257 "claim_type": "exclusive_write", 00:15:53.257 "zoned": false, 00:15:53.257 "supported_io_types": { 00:15:53.257 "read": true, 00:15:53.257 "write": true, 00:15:53.257 "unmap": true, 00:15:53.257 "flush": true, 00:15:53.257 "reset": true, 00:15:53.257 "nvme_admin": false, 00:15:53.257 "nvme_io": false, 00:15:53.257 "nvme_io_md": false, 00:15:53.257 "write_zeroes": true, 00:15:53.257 "zcopy": true, 00:15:53.257 "get_zone_info": false, 00:15:53.257 "zone_management": false, 00:15:53.257 "zone_append": false, 00:15:53.257 "compare": false, 00:15:53.257 "compare_and_write": false, 00:15:53.257 "abort": true, 00:15:53.257 "seek_hole": false, 00:15:53.257 "seek_data": false, 00:15:53.257 "copy": true, 00:15:53.257 "nvme_iov_md": false 00:15:53.257 }, 00:15:53.257 "memory_domains": [ 00:15:53.257 { 00:15:53.257 "dma_device_id": "system", 00:15:53.257 "dma_device_type": 1 00:15:53.257 }, 00:15:53.257 { 00:15:53.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:53.257 "dma_device_type": 2 00:15:53.257 } 
00:15:53.257 ], 00:15:53.257 "driver_specific": {} 00:15:53.257 } 00:15:53.257 ] 00:15:53.257 13:25:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.257 13:25:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:53.257 13:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:53.257 13:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:53.257 13:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:53.257 13:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:53.257 13:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:53.257 13:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:53.257 13:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:53.257 13:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:53.257 13:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:53.257 13:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:53.257 13:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.257 13:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:53.257 13:25:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.257 13:25:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.257 
13:25:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.257 13:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:53.257 "name": "Existed_Raid", 00:15:53.257 "uuid": "034715d4-1439-49e3-a1dc-7dbdfa6cf4b4", 00:15:53.257 "strip_size_kb": 64, 00:15:53.258 "state": "online", 00:15:53.258 "raid_level": "raid5f", 00:15:53.258 "superblock": true, 00:15:53.258 "num_base_bdevs": 4, 00:15:53.258 "num_base_bdevs_discovered": 4, 00:15:53.258 "num_base_bdevs_operational": 4, 00:15:53.258 "base_bdevs_list": [ 00:15:53.258 { 00:15:53.258 "name": "NewBaseBdev", 00:15:53.258 "uuid": "32cf6315-abc6-4a01-ae51-958b8a2c3701", 00:15:53.258 "is_configured": true, 00:15:53.258 "data_offset": 2048, 00:15:53.258 "data_size": 63488 00:15:53.258 }, 00:15:53.258 { 00:15:53.258 "name": "BaseBdev2", 00:15:53.258 "uuid": "993dafa5-6cd9-4436-ac9b-daa896739905", 00:15:53.258 "is_configured": true, 00:15:53.258 "data_offset": 2048, 00:15:53.258 "data_size": 63488 00:15:53.258 }, 00:15:53.258 { 00:15:53.258 "name": "BaseBdev3", 00:15:53.258 "uuid": "5223f90b-a20a-435d-a770-d3dc8cb8803d", 00:15:53.258 "is_configured": true, 00:15:53.258 "data_offset": 2048, 00:15:53.258 "data_size": 63488 00:15:53.258 }, 00:15:53.258 { 00:15:53.258 "name": "BaseBdev4", 00:15:53.258 "uuid": "7db914f6-291b-4409-84f8-3d09846bdec7", 00:15:53.258 "is_configured": true, 00:15:53.258 "data_offset": 2048, 00:15:53.258 "data_size": 63488 00:15:53.258 } 00:15:53.258 ] 00:15:53.258 }' 00:15:53.258 13:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:53.258 13:25:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.827 13:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:53.827 13:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=Existed_Raid 00:15:53.827 13:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:53.827 13:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:53.827 13:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:53.827 13:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:53.827 13:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:53.827 13:25:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.827 13:25:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.827 13:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:53.827 [2024-11-17 13:25:42.798606] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:53.827 13:25:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.827 13:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:53.827 "name": "Existed_Raid", 00:15:53.827 "aliases": [ 00:15:53.827 "034715d4-1439-49e3-a1dc-7dbdfa6cf4b4" 00:15:53.827 ], 00:15:53.827 "product_name": "Raid Volume", 00:15:53.827 "block_size": 512, 00:15:53.827 "num_blocks": 190464, 00:15:53.827 "uuid": "034715d4-1439-49e3-a1dc-7dbdfa6cf4b4", 00:15:53.827 "assigned_rate_limits": { 00:15:53.827 "rw_ios_per_sec": 0, 00:15:53.827 "rw_mbytes_per_sec": 0, 00:15:53.827 "r_mbytes_per_sec": 0, 00:15:53.827 "w_mbytes_per_sec": 0 00:15:53.827 }, 00:15:53.827 "claimed": false, 00:15:53.827 "zoned": false, 00:15:53.827 "supported_io_types": { 00:15:53.827 "read": true, 00:15:53.827 "write": true, 00:15:53.827 "unmap": false, 00:15:53.827 "flush": false, 
00:15:53.827 "reset": true, 00:15:53.827 "nvme_admin": false, 00:15:53.827 "nvme_io": false, 00:15:53.827 "nvme_io_md": false, 00:15:53.827 "write_zeroes": true, 00:15:53.827 "zcopy": false, 00:15:53.827 "get_zone_info": false, 00:15:53.827 "zone_management": false, 00:15:53.827 "zone_append": false, 00:15:53.827 "compare": false, 00:15:53.827 "compare_and_write": false, 00:15:53.827 "abort": false, 00:15:53.827 "seek_hole": false, 00:15:53.827 "seek_data": false, 00:15:53.827 "copy": false, 00:15:53.827 "nvme_iov_md": false 00:15:53.827 }, 00:15:53.827 "driver_specific": { 00:15:53.828 "raid": { 00:15:53.828 "uuid": "034715d4-1439-49e3-a1dc-7dbdfa6cf4b4", 00:15:53.828 "strip_size_kb": 64, 00:15:53.828 "state": "online", 00:15:53.828 "raid_level": "raid5f", 00:15:53.828 "superblock": true, 00:15:53.828 "num_base_bdevs": 4, 00:15:53.828 "num_base_bdevs_discovered": 4, 00:15:53.828 "num_base_bdevs_operational": 4, 00:15:53.828 "base_bdevs_list": [ 00:15:53.828 { 00:15:53.828 "name": "NewBaseBdev", 00:15:53.828 "uuid": "32cf6315-abc6-4a01-ae51-958b8a2c3701", 00:15:53.828 "is_configured": true, 00:15:53.828 "data_offset": 2048, 00:15:53.828 "data_size": 63488 00:15:53.828 }, 00:15:53.828 { 00:15:53.828 "name": "BaseBdev2", 00:15:53.828 "uuid": "993dafa5-6cd9-4436-ac9b-daa896739905", 00:15:53.828 "is_configured": true, 00:15:53.828 "data_offset": 2048, 00:15:53.828 "data_size": 63488 00:15:53.828 }, 00:15:53.828 { 00:15:53.828 "name": "BaseBdev3", 00:15:53.828 "uuid": "5223f90b-a20a-435d-a770-d3dc8cb8803d", 00:15:53.828 "is_configured": true, 00:15:53.828 "data_offset": 2048, 00:15:53.828 "data_size": 63488 00:15:53.828 }, 00:15:53.828 { 00:15:53.828 "name": "BaseBdev4", 00:15:53.828 "uuid": "7db914f6-291b-4409-84f8-3d09846bdec7", 00:15:53.828 "is_configured": true, 00:15:53.828 "data_offset": 2048, 00:15:53.828 "data_size": 63488 00:15:53.828 } 00:15:53.828 ] 00:15:53.828 } 00:15:53.828 } 00:15:53.828 }' 00:15:53.828 13:25:42 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:53.828 13:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:53.828 BaseBdev2 00:15:53.828 BaseBdev3 00:15:53.828 BaseBdev4' 00:15:53.828 13:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:53.828 13:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:53.828 13:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:53.828 13:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:53.828 13:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:53.828 13:25:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.828 13:25:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.828 13:25:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.828 13:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:53.828 13:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:53.828 13:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:53.828 13:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:53.828 13:25:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.828 13:25:42 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:53.828 13:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:53.828 13:25:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.828 13:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:53.828 13:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:53.828 13:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:53.828 13:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:53.828 13:25:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.828 13:25:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.828 13:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:53.828 13:25:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.828 13:25:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:53.828 13:25:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:53.828 13:25:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:53.828 13:25:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:53.828 13:25:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.828 13:25:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:15:53.828 13:25:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:54.088 13:25:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.088 13:25:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:54.088 13:25:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:54.088 13:25:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:54.088 13:25:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.088 13:25:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.088 [2024-11-17 13:25:43.101843] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:54.088 [2024-11-17 13:25:43.101876] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:54.088 [2024-11-17 13:25:43.101953] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:54.088 [2024-11-17 13:25:43.102261] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:54.088 [2024-11-17 13:25:43.102283] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:54.088 13:25:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.088 13:25:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83304 00:15:54.088 13:25:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 83304 ']' 00:15:54.088 13:25:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 
83304 00:15:54.088 13:25:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:54.088 13:25:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:54.088 13:25:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83304 00:15:54.088 13:25:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:54.088 killing process with pid 83304 00:15:54.088 13:25:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:54.088 13:25:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83304' 00:15:54.088 13:25:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 83304 00:15:54.088 [2024-11-17 13:25:43.150901] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:54.088 13:25:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 83304 00:15:54.347 [2024-11-17 13:25:43.534739] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:55.727 13:25:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:55.727 00:15:55.727 real 0m11.321s 00:15:55.727 user 0m17.931s 00:15:55.727 sys 0m2.072s 00:15:55.727 13:25:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:55.727 13:25:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.727 ************************************ 00:15:55.727 END TEST raid5f_state_function_test_sb 00:15:55.727 ************************************ 00:15:55.727 13:25:44 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:15:55.727 13:25:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 
-le 1 ']' 00:15:55.727 13:25:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:55.727 13:25:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:55.727 ************************************ 00:15:55.727 START TEST raid5f_superblock_test 00:15:55.727 ************************************ 00:15:55.727 13:25:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:15:55.727 13:25:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:15:55.727 13:25:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:15:55.727 13:25:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:55.727 13:25:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:55.727 13:25:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:55.727 13:25:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:55.727 13:25:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:55.728 13:25:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:55.728 13:25:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:55.728 13:25:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:55.728 13:25:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:55.728 13:25:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:55.728 13:25:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:55.728 13:25:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:15:55.728 13:25:44 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:15:55.728 13:25:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:15:55.728 13:25:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=83971 00:15:55.728 13:25:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:55.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:55.728 13:25:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 83971 00:15:55.728 13:25:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 83971 ']' 00:15:55.728 13:25:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:55.728 13:25:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:55.728 13:25:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:55.728 13:25:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:55.728 13:25:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.728 [2024-11-17 13:25:44.763450] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:15:55.728 [2024-11-17 13:25:44.763606] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83971 ] 00:15:55.728 [2024-11-17 13:25:44.935550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:55.987 [2024-11-17 13:25:45.041441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:56.246 [2024-11-17 13:25:45.225658] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:56.246 [2024-11-17 13:25:45.225792] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:56.507 13:25:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:56.507 13:25:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:15:56.507 13:25:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:56.507 13:25:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:56.507 13:25:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:56.507 13:25:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:56.507 13:25:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:56.507 13:25:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:56.507 13:25:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:56.507 13:25:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:56.507 13:25:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:15:56.507 13:25:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.507 13:25:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.507 malloc1 00:15:56.507 13:25:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.507 13:25:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:56.507 13:25:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.507 13:25:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.507 [2024-11-17 13:25:45.638220] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:56.507 [2024-11-17 13:25:45.638552] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:56.507 [2024-11-17 13:25:45.638632] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:56.507 [2024-11-17 13:25:45.638683] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:56.507 [2024-11-17 13:25:45.640683] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:56.507 [2024-11-17 13:25:45.640851] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:56.507 pt1 00:15:56.507 13:25:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.507 13:25:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:56.507 13:25:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:56.507 13:25:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:56.507 13:25:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:15:56.507 13:25:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:56.507 13:25:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:56.507 13:25:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:56.507 13:25:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:56.507 13:25:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:56.507 13:25:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.507 13:25:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.507 malloc2 00:15:56.507 13:25:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.507 13:25:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:56.507 13:25:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.507 13:25:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.507 [2024-11-17 13:25:45.691554] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:56.507 [2024-11-17 13:25:45.691742] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:56.507 [2024-11-17 13:25:45.691812] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:56.507 [2024-11-17 13:25:45.691855] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:56.507 [2024-11-17 13:25:45.693809] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:56.507 [2024-11-17 13:25:45.693899] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:56.507 pt2 00:15:56.507 13:25:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.507 13:25:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:56.507 13:25:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:56.507 13:25:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:56.507 13:25:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:56.507 13:25:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:56.507 13:25:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:56.507 13:25:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:56.507 13:25:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:56.507 13:25:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:56.507 13:25:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.507 13:25:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.768 malloc3 00:15:56.768 13:25:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.768 13:25:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:56.768 13:25:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.768 13:25:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.768 [2024-11-17 13:25:45.780298] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:56.768 [2024-11-17 13:25:45.780410] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:56.768 [2024-11-17 13:25:45.780446] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:56.768 [2024-11-17 13:25:45.780474] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:56.768 [2024-11-17 13:25:45.782404] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:56.768 [2024-11-17 13:25:45.782473] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:56.768 pt3 00:15:56.768 13:25:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.768 13:25:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:56.768 13:25:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:56.768 13:25:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:15:56.768 13:25:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:15:56.768 13:25:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:15:56.768 13:25:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:56.768 13:25:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:56.768 13:25:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:56.768 13:25:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:15:56.768 13:25:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.768 13:25:45 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.768 malloc4 00:15:56.768 13:25:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.768 13:25:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:56.768 13:25:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.768 13:25:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.768 [2024-11-17 13:25:45.838199] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:56.768 [2024-11-17 13:25:45.838311] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:56.768 [2024-11-17 13:25:45.838343] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:56.768 [2024-11-17 13:25:45.838370] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:56.768 [2024-11-17 13:25:45.840344] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:56.768 [2024-11-17 13:25:45.840408] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:56.768 pt4 00:15:56.768 13:25:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.768 13:25:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:56.768 13:25:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:56.768 13:25:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:15:56.768 13:25:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.768 13:25:45 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:56.768 [2024-11-17 13:25:45.850235] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:56.768 [2024-11-17 13:25:45.851971] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:56.768 [2024-11-17 13:25:45.852066] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:56.768 [2024-11-17 13:25:45.852141] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:56.768 [2024-11-17 13:25:45.852392] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:56.768 [2024-11-17 13:25:45.852440] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:56.768 [2024-11-17 13:25:45.852710] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:56.768 [2024-11-17 13:25:45.859011] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:56.768 [2024-11-17 13:25:45.859032] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:56.768 [2024-11-17 13:25:45.859205] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:56.768 13:25:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.768 13:25:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:56.768 13:25:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:56.768 13:25:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:56.768 13:25:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:56.768 13:25:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:56.768 
13:25:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:56.768 13:25:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.768 13:25:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.768 13:25:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.768 13:25:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.768 13:25:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.768 13:25:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.768 13:25:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.768 13:25:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.768 13:25:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.768 13:25:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.768 "name": "raid_bdev1", 00:15:56.768 "uuid": "5da421b1-b47f-4b3b-aa17-18355587bcb1", 00:15:56.768 "strip_size_kb": 64, 00:15:56.768 "state": "online", 00:15:56.768 "raid_level": "raid5f", 00:15:56.768 "superblock": true, 00:15:56.768 "num_base_bdevs": 4, 00:15:56.768 "num_base_bdevs_discovered": 4, 00:15:56.768 "num_base_bdevs_operational": 4, 00:15:56.768 "base_bdevs_list": [ 00:15:56.768 { 00:15:56.768 "name": "pt1", 00:15:56.768 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:56.768 "is_configured": true, 00:15:56.768 "data_offset": 2048, 00:15:56.768 "data_size": 63488 00:15:56.768 }, 00:15:56.768 { 00:15:56.768 "name": "pt2", 00:15:56.768 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:56.768 "is_configured": true, 00:15:56.768 "data_offset": 2048, 00:15:56.768 
"data_size": 63488 00:15:56.768 }, 00:15:56.768 { 00:15:56.768 "name": "pt3", 00:15:56.768 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:56.768 "is_configured": true, 00:15:56.768 "data_offset": 2048, 00:15:56.768 "data_size": 63488 00:15:56.769 }, 00:15:56.769 { 00:15:56.769 "name": "pt4", 00:15:56.769 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:56.769 "is_configured": true, 00:15:56.769 "data_offset": 2048, 00:15:56.769 "data_size": 63488 00:15:56.769 } 00:15:56.769 ] 00:15:56.769 }' 00:15:56.769 13:25:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.769 13:25:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.338 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:57.338 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:57.338 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:57.338 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:57.338 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:57.338 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:57.338 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:57.338 13:25:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.338 13:25:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.338 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:57.338 [2024-11-17 13:25:46.342378] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:57.338 13:25:46 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.338 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:57.338 "name": "raid_bdev1", 00:15:57.338 "aliases": [ 00:15:57.338 "5da421b1-b47f-4b3b-aa17-18355587bcb1" 00:15:57.338 ], 00:15:57.338 "product_name": "Raid Volume", 00:15:57.338 "block_size": 512, 00:15:57.338 "num_blocks": 190464, 00:15:57.338 "uuid": "5da421b1-b47f-4b3b-aa17-18355587bcb1", 00:15:57.338 "assigned_rate_limits": { 00:15:57.338 "rw_ios_per_sec": 0, 00:15:57.338 "rw_mbytes_per_sec": 0, 00:15:57.338 "r_mbytes_per_sec": 0, 00:15:57.338 "w_mbytes_per_sec": 0 00:15:57.338 }, 00:15:57.338 "claimed": false, 00:15:57.338 "zoned": false, 00:15:57.338 "supported_io_types": { 00:15:57.338 "read": true, 00:15:57.338 "write": true, 00:15:57.338 "unmap": false, 00:15:57.338 "flush": false, 00:15:57.338 "reset": true, 00:15:57.338 "nvme_admin": false, 00:15:57.338 "nvme_io": false, 00:15:57.338 "nvme_io_md": false, 00:15:57.338 "write_zeroes": true, 00:15:57.338 "zcopy": false, 00:15:57.338 "get_zone_info": false, 00:15:57.338 "zone_management": false, 00:15:57.338 "zone_append": false, 00:15:57.338 "compare": false, 00:15:57.338 "compare_and_write": false, 00:15:57.338 "abort": false, 00:15:57.338 "seek_hole": false, 00:15:57.338 "seek_data": false, 00:15:57.338 "copy": false, 00:15:57.338 "nvme_iov_md": false 00:15:57.338 }, 00:15:57.338 "driver_specific": { 00:15:57.338 "raid": { 00:15:57.338 "uuid": "5da421b1-b47f-4b3b-aa17-18355587bcb1", 00:15:57.338 "strip_size_kb": 64, 00:15:57.338 "state": "online", 00:15:57.338 "raid_level": "raid5f", 00:15:57.338 "superblock": true, 00:15:57.338 "num_base_bdevs": 4, 00:15:57.338 "num_base_bdevs_discovered": 4, 00:15:57.338 "num_base_bdevs_operational": 4, 00:15:57.338 "base_bdevs_list": [ 00:15:57.338 { 00:15:57.338 "name": "pt1", 00:15:57.338 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:57.338 "is_configured": true, 00:15:57.338 "data_offset": 2048, 
00:15:57.338 "data_size": 63488 00:15:57.338 }, 00:15:57.338 { 00:15:57.338 "name": "pt2", 00:15:57.338 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:57.338 "is_configured": true, 00:15:57.338 "data_offset": 2048, 00:15:57.338 "data_size": 63488 00:15:57.338 }, 00:15:57.338 { 00:15:57.338 "name": "pt3", 00:15:57.338 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:57.338 "is_configured": true, 00:15:57.338 "data_offset": 2048, 00:15:57.338 "data_size": 63488 00:15:57.338 }, 00:15:57.338 { 00:15:57.338 "name": "pt4", 00:15:57.338 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:57.338 "is_configured": true, 00:15:57.338 "data_offset": 2048, 00:15:57.338 "data_size": 63488 00:15:57.338 } 00:15:57.338 ] 00:15:57.338 } 00:15:57.338 } 00:15:57.338 }' 00:15:57.338 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:57.338 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:57.338 pt2 00:15:57.338 pt3 00:15:57.338 pt4' 00:15:57.338 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:57.338 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:57.338 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:57.338 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:57.338 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:57.338 13:25:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.338 13:25:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.338 13:25:46 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.338 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:57.338 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:57.338 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:57.338 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:57.338 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:57.338 13:25:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.338 13:25:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.338 13:25:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.338 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:57.338 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:57.339 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:57.339 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:57.339 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:57.339 13:25:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.339 13:25:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.598 13:25:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.598 13:25:46 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:57.598 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:57.598 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:57.598 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:57.598 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:57.598 13:25:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.598 13:25:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.598 13:25:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.598 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:57.598 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:57.598 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:57.598 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:57.598 13:25:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.598 13:25:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.598 [2024-11-17 13:25:46.653775] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:57.598 13:25:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.598 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5da421b1-b47f-4b3b-aa17-18355587bcb1 00:15:57.598 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
5da421b1-b47f-4b3b-aa17-18355587bcb1 ']' 00:15:57.598 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:57.598 13:25:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.598 13:25:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.598 [2024-11-17 13:25:46.693549] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:57.598 [2024-11-17 13:25:46.693575] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:57.598 [2024-11-17 13:25:46.693644] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:57.598 [2024-11-17 13:25:46.693722] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:57.598 [2024-11-17 13:25:46.693736] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:57.598 13:25:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.598 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.598 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:57.598 13:25:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.598 13:25:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.598 13:25:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.598 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:57.598 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:57.598 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:57.598 
13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:57.598 13:25:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.599 13:25:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.599 13:25:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.599 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:57.599 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:57.599 13:25:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.599 13:25:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.599 13:25:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.599 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:57.599 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:57.599 13:25:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.599 13:25:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.599 13:25:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.599 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:57.599 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:15:57.599 13:25:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.599 13:25:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.599 13:25:46 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.599 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:57.599 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:57.599 13:25:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.599 13:25:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.858 13:25:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.858 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:57.858 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:57.858 13:25:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:15:57.858 13:25:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:57.858 13:25:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:57.858 13:25:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:57.858 13:25:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:57.858 13:25:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:57.858 13:25:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:57.859 13:25:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:15:57.859 13:25:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.859 [2024-11-17 13:25:46.857321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:57.859 [2024-11-17 13:25:46.859108] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:57.859 [2024-11-17 13:25:46.859194] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:57.859 [2024-11-17 13:25:46.859256] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:15:57.859 [2024-11-17 13:25:46.859345] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:57.859 [2024-11-17 13:25:46.859456] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:57.859 [2024-11-17 13:25:46.859520] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:57.859 [2024-11-17 13:25:46.859540] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:15:57.859 [2024-11-17 13:25:46.859560] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:57.859 [2024-11-17 13:25:46.859571] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:15:57.859 request: 00:15:57.859 { 00:15:57.859 "name": "raid_bdev1", 00:15:57.859 "raid_level": "raid5f", 00:15:57.859 "base_bdevs": [ 00:15:57.859 "malloc1", 00:15:57.859 "malloc2", 00:15:57.859 "malloc3", 00:15:57.859 "malloc4" 00:15:57.859 ], 00:15:57.859 "strip_size_kb": 64, 00:15:57.859 "superblock": false, 00:15:57.859 "method": "bdev_raid_create", 00:15:57.859 "req_id": 1 00:15:57.859 } 00:15:57.859 Got JSON-RPC error response 
00:15:57.859 response: 00:15:57.859 { 00:15:57.859 "code": -17, 00:15:57.859 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:57.859 } 00:15:57.859 13:25:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:57.859 13:25:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:15:57.859 13:25:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:57.859 13:25:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:57.859 13:25:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:57.859 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:57.859 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.859 13:25:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.859 13:25:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.859 13:25:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.859 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:57.859 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:57.859 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:57.859 13:25:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.859 13:25:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.859 [2024-11-17 13:25:46.913198] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:57.859 [2024-11-17 13:25:46.913301] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:15:57.859 [2024-11-17 13:25:46.913330] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:57.859 [2024-11-17 13:25:46.913357] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:57.859 [2024-11-17 13:25:46.915494] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:57.859 [2024-11-17 13:25:46.915565] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:57.859 [2024-11-17 13:25:46.915646] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:57.859 [2024-11-17 13:25:46.915734] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:57.859 pt1 00:15:57.859 13:25:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.859 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:15:57.859 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:57.859 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:57.859 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:57.859 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:57.859 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:57.859 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.859 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.859 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.859 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:15:57.859 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.859 13:25:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.859 13:25:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.859 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.859 13:25:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.859 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.859 "name": "raid_bdev1", 00:15:57.859 "uuid": "5da421b1-b47f-4b3b-aa17-18355587bcb1", 00:15:57.859 "strip_size_kb": 64, 00:15:57.859 "state": "configuring", 00:15:57.859 "raid_level": "raid5f", 00:15:57.859 "superblock": true, 00:15:57.859 "num_base_bdevs": 4, 00:15:57.859 "num_base_bdevs_discovered": 1, 00:15:57.859 "num_base_bdevs_operational": 4, 00:15:57.859 "base_bdevs_list": [ 00:15:57.859 { 00:15:57.859 "name": "pt1", 00:15:57.859 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:57.859 "is_configured": true, 00:15:57.859 "data_offset": 2048, 00:15:57.859 "data_size": 63488 00:15:57.859 }, 00:15:57.859 { 00:15:57.859 "name": null, 00:15:57.859 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:57.859 "is_configured": false, 00:15:57.859 "data_offset": 2048, 00:15:57.859 "data_size": 63488 00:15:57.859 }, 00:15:57.859 { 00:15:57.859 "name": null, 00:15:57.859 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:57.859 "is_configured": false, 00:15:57.859 "data_offset": 2048, 00:15:57.859 "data_size": 63488 00:15:57.859 }, 00:15:57.859 { 00:15:57.859 "name": null, 00:15:57.859 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:57.859 "is_configured": false, 00:15:57.859 "data_offset": 2048, 00:15:57.859 "data_size": 63488 00:15:57.859 } 00:15:57.859 ] 00:15:57.859 }' 
00:15:57.859 13:25:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.859 13:25:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.119 13:25:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:15:58.119 13:25:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:58.119 13:25:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.119 13:25:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.119 [2024-11-17 13:25:47.300535] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:58.119 [2024-11-17 13:25:47.300651] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.119 [2024-11-17 13:25:47.300687] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:58.119 [2024-11-17 13:25:47.300715] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.119 [2024-11-17 13:25:47.301169] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.119 [2024-11-17 13:25:47.301240] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:58.119 [2024-11-17 13:25:47.301364] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:58.119 [2024-11-17 13:25:47.301415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:58.119 pt2 00:15:58.119 13:25:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.119 13:25:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:58.119 13:25:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:58.119 13:25:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.119 [2024-11-17 13:25:47.308526] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:58.119 13:25:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.119 13:25:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:15:58.119 13:25:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:58.119 13:25:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:58.119 13:25:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:58.119 13:25:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:58.119 13:25:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:58.119 13:25:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.119 13:25:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.119 13:25:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.119 13:25:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.119 13:25:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.119 13:25:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.119 13:25:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.119 13:25:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.119 13:25:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:15:58.379 13:25:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.379 "name": "raid_bdev1", 00:15:58.379 "uuid": "5da421b1-b47f-4b3b-aa17-18355587bcb1", 00:15:58.379 "strip_size_kb": 64, 00:15:58.379 "state": "configuring", 00:15:58.379 "raid_level": "raid5f", 00:15:58.379 "superblock": true, 00:15:58.379 "num_base_bdevs": 4, 00:15:58.379 "num_base_bdevs_discovered": 1, 00:15:58.379 "num_base_bdevs_operational": 4, 00:15:58.379 "base_bdevs_list": [ 00:15:58.379 { 00:15:58.379 "name": "pt1", 00:15:58.379 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:58.379 "is_configured": true, 00:15:58.379 "data_offset": 2048, 00:15:58.379 "data_size": 63488 00:15:58.379 }, 00:15:58.379 { 00:15:58.379 "name": null, 00:15:58.379 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:58.379 "is_configured": false, 00:15:58.379 "data_offset": 0, 00:15:58.379 "data_size": 63488 00:15:58.379 }, 00:15:58.379 { 00:15:58.379 "name": null, 00:15:58.379 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:58.379 "is_configured": false, 00:15:58.379 "data_offset": 2048, 00:15:58.379 "data_size": 63488 00:15:58.379 }, 00:15:58.379 { 00:15:58.379 "name": null, 00:15:58.379 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:58.379 "is_configured": false, 00:15:58.379 "data_offset": 2048, 00:15:58.379 "data_size": 63488 00:15:58.379 } 00:15:58.379 ] 00:15:58.379 }' 00:15:58.379 13:25:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.379 13:25:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.639 13:25:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:58.639 13:25:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:58.639 13:25:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:15:58.639 13:25:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.639 13:25:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.639 [2024-11-17 13:25:47.739788] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:58.639 [2024-11-17 13:25:47.739900] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.639 [2024-11-17 13:25:47.739936] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:58.639 [2024-11-17 13:25:47.739962] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.639 [2024-11-17 13:25:47.740454] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.639 [2024-11-17 13:25:47.740511] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:58.639 [2024-11-17 13:25:47.740634] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:58.639 [2024-11-17 13:25:47.740684] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:58.639 pt2 00:15:58.639 13:25:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.639 13:25:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:58.639 13:25:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:58.639 13:25:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:58.639 13:25:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.639 13:25:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.639 [2024-11-17 13:25:47.751734] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:15:58.639 [2024-11-17 13:25:47.751821] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.639 [2024-11-17 13:25:47.751851] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:58.639 [2024-11-17 13:25:47.751876] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.639 [2024-11-17 13:25:47.752261] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.639 [2024-11-17 13:25:47.752311] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:58.640 [2024-11-17 13:25:47.752405] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:58.640 [2024-11-17 13:25:47.752448] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:58.640 pt3 00:15:58.640 13:25:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.640 13:25:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:58.640 13:25:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:58.640 13:25:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:58.640 13:25:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.640 13:25:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.640 [2024-11-17 13:25:47.763692] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:58.640 [2024-11-17 13:25:47.763740] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.640 [2024-11-17 13:25:47.763756] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:15:58.640 [2024-11-17 13:25:47.763763] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.640 [2024-11-17 13:25:47.764084] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.640 [2024-11-17 13:25:47.764099] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:58.640 [2024-11-17 13:25:47.764152] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:58.640 [2024-11-17 13:25:47.764167] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:58.640 [2024-11-17 13:25:47.764318] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:58.640 [2024-11-17 13:25:47.764327] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:58.640 [2024-11-17 13:25:47.764533] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:58.640 [2024-11-17 13:25:47.771109] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:58.640 [2024-11-17 13:25:47.771133] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:58.640 [2024-11-17 13:25:47.771322] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:58.640 pt4 00:15:58.640 13:25:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.640 13:25:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:58.640 13:25:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:58.640 13:25:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:58.640 13:25:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:58.640 13:25:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:15:58.640 13:25:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:58.640 13:25:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:58.640 13:25:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:58.640 13:25:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.640 13:25:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.640 13:25:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.640 13:25:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.640 13:25:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.640 13:25:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.640 13:25:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.640 13:25:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.640 13:25:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.640 13:25:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.640 "name": "raid_bdev1", 00:15:58.640 "uuid": "5da421b1-b47f-4b3b-aa17-18355587bcb1", 00:15:58.640 "strip_size_kb": 64, 00:15:58.640 "state": "online", 00:15:58.640 "raid_level": "raid5f", 00:15:58.640 "superblock": true, 00:15:58.640 "num_base_bdevs": 4, 00:15:58.640 "num_base_bdevs_discovered": 4, 00:15:58.640 "num_base_bdevs_operational": 4, 00:15:58.640 "base_bdevs_list": [ 00:15:58.640 { 00:15:58.640 "name": "pt1", 00:15:58.640 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:58.640 "is_configured": true, 00:15:58.640 
"data_offset": 2048, 00:15:58.640 "data_size": 63488 00:15:58.640 }, 00:15:58.640 { 00:15:58.640 "name": "pt2", 00:15:58.640 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:58.640 "is_configured": true, 00:15:58.640 "data_offset": 2048, 00:15:58.640 "data_size": 63488 00:15:58.640 }, 00:15:58.640 { 00:15:58.640 "name": "pt3", 00:15:58.640 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:58.640 "is_configured": true, 00:15:58.640 "data_offset": 2048, 00:15:58.640 "data_size": 63488 00:15:58.640 }, 00:15:58.640 { 00:15:58.640 "name": "pt4", 00:15:58.640 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:58.640 "is_configured": true, 00:15:58.640 "data_offset": 2048, 00:15:58.640 "data_size": 63488 00:15:58.640 } 00:15:58.640 ] 00:15:58.640 }' 00:15:58.640 13:25:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.640 13:25:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.209 13:25:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:59.209 13:25:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:59.209 13:25:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:59.209 13:25:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:59.209 13:25:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:59.209 13:25:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:59.209 13:25:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:59.209 13:25:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.209 13:25:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.209 13:25:48 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:59.209 [2024-11-17 13:25:48.250519] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:59.209 13:25:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.209 13:25:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:59.209 "name": "raid_bdev1", 00:15:59.209 "aliases": [ 00:15:59.209 "5da421b1-b47f-4b3b-aa17-18355587bcb1" 00:15:59.209 ], 00:15:59.209 "product_name": "Raid Volume", 00:15:59.209 "block_size": 512, 00:15:59.209 "num_blocks": 190464, 00:15:59.209 "uuid": "5da421b1-b47f-4b3b-aa17-18355587bcb1", 00:15:59.209 "assigned_rate_limits": { 00:15:59.209 "rw_ios_per_sec": 0, 00:15:59.209 "rw_mbytes_per_sec": 0, 00:15:59.209 "r_mbytes_per_sec": 0, 00:15:59.209 "w_mbytes_per_sec": 0 00:15:59.209 }, 00:15:59.209 "claimed": false, 00:15:59.209 "zoned": false, 00:15:59.209 "supported_io_types": { 00:15:59.209 "read": true, 00:15:59.209 "write": true, 00:15:59.209 "unmap": false, 00:15:59.209 "flush": false, 00:15:59.209 "reset": true, 00:15:59.209 "nvme_admin": false, 00:15:59.209 "nvme_io": false, 00:15:59.209 "nvme_io_md": false, 00:15:59.209 "write_zeroes": true, 00:15:59.209 "zcopy": false, 00:15:59.209 "get_zone_info": false, 00:15:59.209 "zone_management": false, 00:15:59.209 "zone_append": false, 00:15:59.209 "compare": false, 00:15:59.209 "compare_and_write": false, 00:15:59.209 "abort": false, 00:15:59.209 "seek_hole": false, 00:15:59.209 "seek_data": false, 00:15:59.209 "copy": false, 00:15:59.209 "nvme_iov_md": false 00:15:59.209 }, 00:15:59.209 "driver_specific": { 00:15:59.209 "raid": { 00:15:59.209 "uuid": "5da421b1-b47f-4b3b-aa17-18355587bcb1", 00:15:59.209 "strip_size_kb": 64, 00:15:59.209 "state": "online", 00:15:59.209 "raid_level": "raid5f", 00:15:59.209 "superblock": true, 00:15:59.209 "num_base_bdevs": 4, 00:15:59.209 "num_base_bdevs_discovered": 4, 
00:15:59.209 "num_base_bdevs_operational": 4, 00:15:59.209 "base_bdevs_list": [ 00:15:59.209 { 00:15:59.209 "name": "pt1", 00:15:59.209 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:59.209 "is_configured": true, 00:15:59.209 "data_offset": 2048, 00:15:59.209 "data_size": 63488 00:15:59.209 }, 00:15:59.209 { 00:15:59.209 "name": "pt2", 00:15:59.209 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:59.209 "is_configured": true, 00:15:59.209 "data_offset": 2048, 00:15:59.209 "data_size": 63488 00:15:59.209 }, 00:15:59.209 { 00:15:59.209 "name": "pt3", 00:15:59.209 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:59.209 "is_configured": true, 00:15:59.209 "data_offset": 2048, 00:15:59.209 "data_size": 63488 00:15:59.209 }, 00:15:59.209 { 00:15:59.209 "name": "pt4", 00:15:59.209 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:59.209 "is_configured": true, 00:15:59.209 "data_offset": 2048, 00:15:59.209 "data_size": 63488 00:15:59.209 } 00:15:59.209 ] 00:15:59.210 } 00:15:59.210 } 00:15:59.210 }' 00:15:59.210 13:25:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:59.210 13:25:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:59.210 pt2 00:15:59.210 pt3 00:15:59.210 pt4' 00:15:59.210 13:25:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:59.210 13:25:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:59.210 13:25:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:59.210 13:25:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:59.210 13:25:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.210 13:25:48 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.210 13:25:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:59.210 13:25:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.210 13:25:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:59.210 13:25:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:59.210 13:25:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:59.210 13:25:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:59.210 13:25:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:59.210 13:25:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.210 13:25:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.470 13:25:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.470 13:25:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:59.470 13:25:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:59.470 13:25:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:59.470 13:25:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:59.470 13:25:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:59.470 13:25:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.470 
13:25:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.470 13:25:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.470 13:25:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:59.470 13:25:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:59.470 13:25:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:59.470 13:25:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:59.470 13:25:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.470 13:25:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.470 13:25:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:59.470 13:25:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.470 13:25:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:59.470 13:25:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:59.470 13:25:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:59.470 13:25:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:59.470 13:25:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.470 13:25:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.470 [2024-11-17 13:25:48.570159] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:59.470 13:25:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:59.470 13:25:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 5da421b1-b47f-4b3b-aa17-18355587bcb1 '!=' 5da421b1-b47f-4b3b-aa17-18355587bcb1 ']' 00:15:59.470 13:25:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:15:59.470 13:25:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:59.470 13:25:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:59.470 13:25:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:59.470 13:25:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.470 13:25:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.470 [2024-11-17 13:25:48.597979] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:59.470 13:25:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.470 13:25:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:59.470 13:25:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:59.470 13:25:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:59.470 13:25:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:59.470 13:25:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:59.470 13:25:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:59.470 13:25:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.470 13:25:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.470 13:25:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:15:59.470 13:25:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.470 13:25:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.470 13:25:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.470 13:25:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.470 13:25:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.470 13:25:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.470 13:25:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.470 "name": "raid_bdev1", 00:15:59.470 "uuid": "5da421b1-b47f-4b3b-aa17-18355587bcb1", 00:15:59.470 "strip_size_kb": 64, 00:15:59.470 "state": "online", 00:15:59.470 "raid_level": "raid5f", 00:15:59.470 "superblock": true, 00:15:59.470 "num_base_bdevs": 4, 00:15:59.470 "num_base_bdevs_discovered": 3, 00:15:59.470 "num_base_bdevs_operational": 3, 00:15:59.470 "base_bdevs_list": [ 00:15:59.470 { 00:15:59.470 "name": null, 00:15:59.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.470 "is_configured": false, 00:15:59.470 "data_offset": 0, 00:15:59.470 "data_size": 63488 00:15:59.470 }, 00:15:59.470 { 00:15:59.470 "name": "pt2", 00:15:59.470 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:59.470 "is_configured": true, 00:15:59.470 "data_offset": 2048, 00:15:59.470 "data_size": 63488 00:15:59.470 }, 00:15:59.470 { 00:15:59.470 "name": "pt3", 00:15:59.470 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:59.470 "is_configured": true, 00:15:59.470 "data_offset": 2048, 00:15:59.470 "data_size": 63488 00:15:59.470 }, 00:15:59.470 { 00:15:59.470 "name": "pt4", 00:15:59.470 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:59.470 "is_configured": true, 00:15:59.470 
"data_offset": 2048, 00:15:59.470 "data_size": 63488 00:15:59.470 } 00:15:59.470 ] 00:15:59.470 }' 00:15:59.470 13:25:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.470 13:25:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.041 13:25:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:00.041 13:25:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.041 13:25:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.041 [2024-11-17 13:25:49.033179] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:00.041 [2024-11-17 13:25:49.033220] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:00.041 [2024-11-17 13:25:49.033293] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:00.041 [2024-11-17 13:25:49.033363] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:00.041 [2024-11-17 13:25:49.033372] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:00.041 13:25:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.041 13:25:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.041 13:25:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:00.041 13:25:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.041 13:25:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.041 13:25:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.041 13:25:49 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:00.041 13:25:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:00.041 13:25:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:00.041 13:25:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:00.041 13:25:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:00.041 13:25:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.041 13:25:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.041 13:25:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.041 13:25:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:00.041 13:25:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:00.041 13:25:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:16:00.041 13:25:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.041 13:25:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.041 13:25:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.041 13:25:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:00.041 13:25:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:00.041 13:25:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:16:00.041 13:25:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.041 13:25:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.041 13:25:49 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.041 13:25:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:00.041 13:25:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:00.041 13:25:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:00.041 13:25:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:00.041 13:25:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:00.041 13:25:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.041 13:25:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.041 [2024-11-17 13:25:49.121023] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:00.041 [2024-11-17 13:25:49.121115] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:00.041 [2024-11-17 13:25:49.121148] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:16:00.041 [2024-11-17 13:25:49.121173] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:00.041 [2024-11-17 13:25:49.123193] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:00.041 [2024-11-17 13:25:49.123273] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:00.041 [2024-11-17 13:25:49.123368] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:00.041 [2024-11-17 13:25:49.123451] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:00.041 pt2 00:16:00.041 13:25:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.041 13:25:49 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:00.041 13:25:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:00.041 13:25:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:00.041 13:25:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:00.041 13:25:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:00.041 13:25:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:00.041 13:25:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.041 13:25:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.041 13:25:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.041 13:25:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.041 13:25:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.041 13:25:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.041 13:25:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.041 13:25:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.042 13:25:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.042 13:25:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.042 "name": "raid_bdev1", 00:16:00.042 "uuid": "5da421b1-b47f-4b3b-aa17-18355587bcb1", 00:16:00.042 "strip_size_kb": 64, 00:16:00.042 "state": "configuring", 00:16:00.042 "raid_level": "raid5f", 00:16:00.042 "superblock": true, 00:16:00.042 
"num_base_bdevs": 4, 00:16:00.042 "num_base_bdevs_discovered": 1, 00:16:00.042 "num_base_bdevs_operational": 3, 00:16:00.042 "base_bdevs_list": [ 00:16:00.042 { 00:16:00.042 "name": null, 00:16:00.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.042 "is_configured": false, 00:16:00.042 "data_offset": 2048, 00:16:00.042 "data_size": 63488 00:16:00.042 }, 00:16:00.042 { 00:16:00.042 "name": "pt2", 00:16:00.042 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:00.042 "is_configured": true, 00:16:00.042 "data_offset": 2048, 00:16:00.042 "data_size": 63488 00:16:00.042 }, 00:16:00.042 { 00:16:00.042 "name": null, 00:16:00.042 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:00.042 "is_configured": false, 00:16:00.042 "data_offset": 2048, 00:16:00.042 "data_size": 63488 00:16:00.042 }, 00:16:00.042 { 00:16:00.042 "name": null, 00:16:00.042 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:00.042 "is_configured": false, 00:16:00.042 "data_offset": 2048, 00:16:00.042 "data_size": 63488 00:16:00.042 } 00:16:00.042 ] 00:16:00.042 }' 00:16:00.042 13:25:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.042 13:25:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.610 13:25:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:00.610 13:25:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:00.610 13:25:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:00.610 13:25:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.610 13:25:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.610 [2024-11-17 13:25:49.604214] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:00.610 [2024-11-17 
13:25:49.604286] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:00.610 [2024-11-17 13:25:49.604307] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:16:00.610 [2024-11-17 13:25:49.604317] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:00.610 [2024-11-17 13:25:49.604754] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:00.610 [2024-11-17 13:25:49.604779] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:00.610 [2024-11-17 13:25:49.604855] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:00.610 [2024-11-17 13:25:49.604882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:00.610 pt3 00:16:00.610 13:25:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.611 13:25:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:00.611 13:25:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:00.611 13:25:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:00.611 13:25:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:00.611 13:25:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:00.611 13:25:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:00.611 13:25:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.611 13:25:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.611 13:25:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:16:00.611 13:25:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.611 13:25:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.611 13:25:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.611 13:25:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.611 13:25:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.611 13:25:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.611 13:25:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.611 "name": "raid_bdev1", 00:16:00.611 "uuid": "5da421b1-b47f-4b3b-aa17-18355587bcb1", 00:16:00.611 "strip_size_kb": 64, 00:16:00.611 "state": "configuring", 00:16:00.611 "raid_level": "raid5f", 00:16:00.611 "superblock": true, 00:16:00.611 "num_base_bdevs": 4, 00:16:00.611 "num_base_bdevs_discovered": 2, 00:16:00.611 "num_base_bdevs_operational": 3, 00:16:00.611 "base_bdevs_list": [ 00:16:00.611 { 00:16:00.611 "name": null, 00:16:00.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.611 "is_configured": false, 00:16:00.611 "data_offset": 2048, 00:16:00.611 "data_size": 63488 00:16:00.611 }, 00:16:00.611 { 00:16:00.611 "name": "pt2", 00:16:00.611 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:00.611 "is_configured": true, 00:16:00.611 "data_offset": 2048, 00:16:00.611 "data_size": 63488 00:16:00.611 }, 00:16:00.611 { 00:16:00.611 "name": "pt3", 00:16:00.611 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:00.611 "is_configured": true, 00:16:00.611 "data_offset": 2048, 00:16:00.611 "data_size": 63488 00:16:00.611 }, 00:16:00.611 { 00:16:00.611 "name": null, 00:16:00.611 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:00.611 "is_configured": false, 00:16:00.611 "data_offset": 2048, 
00:16:00.611 "data_size": 63488 00:16:00.611 } 00:16:00.611 ] 00:16:00.611 }' 00:16:00.611 13:25:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.611 13:25:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.871 13:25:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:00.871 13:25:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:00.871 13:25:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:16:00.871 13:25:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:00.871 13:25:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.871 13:25:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.871 [2024-11-17 13:25:50.031494] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:00.871 [2024-11-17 13:25:50.031609] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:00.871 [2024-11-17 13:25:50.031645] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:00.871 [2024-11-17 13:25:50.031670] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:00.871 [2024-11-17 13:25:50.032150] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:00.871 [2024-11-17 13:25:50.032215] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:00.871 [2024-11-17 13:25:50.032338] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:00.871 [2024-11-17 13:25:50.032391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:00.871 [2024-11-17 13:25:50.032579] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:00.871 [2024-11-17 13:25:50.032616] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:00.871 [2024-11-17 13:25:50.032904] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:00.871 [2024-11-17 13:25:50.039806] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:00.871 [2024-11-17 13:25:50.039865] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:16:00.871 [2024-11-17 13:25:50.040206] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:00.871 pt4 00:16:00.871 13:25:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.871 13:25:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:00.871 13:25:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:00.871 13:25:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:00.871 13:25:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:00.871 13:25:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:00.871 13:25:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:00.871 13:25:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.871 13:25:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.871 13:25:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.871 13:25:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.871 
13:25:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.871 13:25:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.871 13:25:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.871 13:25:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.871 13:25:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.871 13:25:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.871 "name": "raid_bdev1", 00:16:00.871 "uuid": "5da421b1-b47f-4b3b-aa17-18355587bcb1", 00:16:00.871 "strip_size_kb": 64, 00:16:00.871 "state": "online", 00:16:00.871 "raid_level": "raid5f", 00:16:00.871 "superblock": true, 00:16:00.871 "num_base_bdevs": 4, 00:16:00.871 "num_base_bdevs_discovered": 3, 00:16:00.871 "num_base_bdevs_operational": 3, 00:16:00.871 "base_bdevs_list": [ 00:16:00.871 { 00:16:00.871 "name": null, 00:16:00.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.871 "is_configured": false, 00:16:00.871 "data_offset": 2048, 00:16:00.871 "data_size": 63488 00:16:00.872 }, 00:16:00.872 { 00:16:00.872 "name": "pt2", 00:16:00.872 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:00.872 "is_configured": true, 00:16:00.872 "data_offset": 2048, 00:16:00.872 "data_size": 63488 00:16:00.872 }, 00:16:00.872 { 00:16:00.872 "name": "pt3", 00:16:00.872 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:00.872 "is_configured": true, 00:16:00.872 "data_offset": 2048, 00:16:00.872 "data_size": 63488 00:16:00.872 }, 00:16:00.872 { 00:16:00.872 "name": "pt4", 00:16:00.872 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:00.872 "is_configured": true, 00:16:00.872 "data_offset": 2048, 00:16:00.872 "data_size": 63488 00:16:00.872 } 00:16:00.872 ] 00:16:00.872 }' 00:16:00.872 13:25:50 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.131 13:25:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.391 13:25:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:01.391 13:25:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.391 13:25:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.391 [2024-11-17 13:25:50.479516] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:01.391 [2024-11-17 13:25:50.479592] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:01.391 [2024-11-17 13:25:50.479672] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:01.391 [2024-11-17 13:25:50.479791] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:01.391 [2024-11-17 13:25:50.479853] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:01.391 13:25:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.391 13:25:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:01.391 13:25:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.391 13:25:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.391 13:25:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.391 13:25:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.391 13:25:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:01.391 13:25:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:16:01.391 13:25:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:16:01.391 13:25:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:16:01.391 13:25:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:16:01.391 13:25:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.391 13:25:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.391 13:25:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.391 13:25:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:01.391 13:25:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.391 13:25:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.391 [2024-11-17 13:25:50.535420] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:01.391 [2024-11-17 13:25:50.535485] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:01.391 [2024-11-17 13:25:50.535508] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:16:01.391 [2024-11-17 13:25:50.535519] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:01.391 [2024-11-17 13:25:50.537732] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:01.391 [2024-11-17 13:25:50.537807] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:01.391 [2024-11-17 13:25:50.537890] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:01.391 [2024-11-17 13:25:50.537941] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:01.391 
[2024-11-17 13:25:50.538070] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:01.391 [2024-11-17 13:25:50.538081] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:01.391 [2024-11-17 13:25:50.538094] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:16:01.391 [2024-11-17 13:25:50.538177] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:01.391 [2024-11-17 13:25:50.538296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:01.391 pt1 00:16:01.391 13:25:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.391 13:25:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:16:01.391 13:25:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:01.391 13:25:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:01.391 13:25:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:01.391 13:25:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:01.391 13:25:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:01.391 13:25:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:01.391 13:25:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.392 13:25:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.392 13:25:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.392 13:25:50 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.392 13:25:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.392 13:25:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.392 13:25:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.392 13:25:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.392 13:25:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.392 13:25:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.392 "name": "raid_bdev1", 00:16:01.392 "uuid": "5da421b1-b47f-4b3b-aa17-18355587bcb1", 00:16:01.392 "strip_size_kb": 64, 00:16:01.392 "state": "configuring", 00:16:01.392 "raid_level": "raid5f", 00:16:01.392 "superblock": true, 00:16:01.392 "num_base_bdevs": 4, 00:16:01.392 "num_base_bdevs_discovered": 2, 00:16:01.392 "num_base_bdevs_operational": 3, 00:16:01.392 "base_bdevs_list": [ 00:16:01.392 { 00:16:01.392 "name": null, 00:16:01.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.392 "is_configured": false, 00:16:01.392 "data_offset": 2048, 00:16:01.392 "data_size": 63488 00:16:01.392 }, 00:16:01.392 { 00:16:01.392 "name": "pt2", 00:16:01.392 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:01.392 "is_configured": true, 00:16:01.392 "data_offset": 2048, 00:16:01.392 "data_size": 63488 00:16:01.392 }, 00:16:01.392 { 00:16:01.392 "name": "pt3", 00:16:01.392 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:01.392 "is_configured": true, 00:16:01.392 "data_offset": 2048, 00:16:01.392 "data_size": 63488 00:16:01.392 }, 00:16:01.392 { 00:16:01.392 "name": null, 00:16:01.392 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:01.392 "is_configured": false, 00:16:01.392 "data_offset": 2048, 00:16:01.392 "data_size": 63488 00:16:01.392 } 00:16:01.392 ] 
00:16:01.392 }' 00:16:01.392 13:25:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.392 13:25:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.962 13:25:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:16:01.962 13:25:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:01.962 13:25:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.962 13:25:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.962 13:25:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.962 13:25:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:16:01.962 13:25:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:01.962 13:25:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.962 13:25:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.962 [2024-11-17 13:25:50.974856] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:01.962 [2024-11-17 13:25:50.974964] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:01.962 [2024-11-17 13:25:50.975023] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:16:01.962 [2024-11-17 13:25:50.975061] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:01.962 [2024-11-17 13:25:50.975592] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:01.962 [2024-11-17 13:25:50.975651] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:16:01.962 [2024-11-17 13:25:50.975775] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:01.962 [2024-11-17 13:25:50.975836] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:01.962 [2024-11-17 13:25:50.976032] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:16:01.962 [2024-11-17 13:25:50.976071] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:01.962 [2024-11-17 13:25:50.976360] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:01.962 [2024-11-17 13:25:50.983655] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:16:01.962 [2024-11-17 13:25:50.983712] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:16:01.962 [2024-11-17 13:25:50.984008] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:01.962 pt4 00:16:01.962 13:25:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.962 13:25:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:01.962 13:25:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:01.962 13:25:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:01.962 13:25:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:01.962 13:25:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:01.962 13:25:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:01.962 13:25:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.962 13:25:50 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.962 13:25:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.962 13:25:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.962 13:25:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.962 13:25:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.962 13:25:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.962 13:25:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.962 13:25:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.962 13:25:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.962 "name": "raid_bdev1", 00:16:01.962 "uuid": "5da421b1-b47f-4b3b-aa17-18355587bcb1", 00:16:01.962 "strip_size_kb": 64, 00:16:01.962 "state": "online", 00:16:01.962 "raid_level": "raid5f", 00:16:01.962 "superblock": true, 00:16:01.962 "num_base_bdevs": 4, 00:16:01.962 "num_base_bdevs_discovered": 3, 00:16:01.962 "num_base_bdevs_operational": 3, 00:16:01.962 "base_bdevs_list": [ 00:16:01.962 { 00:16:01.962 "name": null, 00:16:01.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.962 "is_configured": false, 00:16:01.962 "data_offset": 2048, 00:16:01.962 "data_size": 63488 00:16:01.962 }, 00:16:01.962 { 00:16:01.962 "name": "pt2", 00:16:01.963 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:01.963 "is_configured": true, 00:16:01.963 "data_offset": 2048, 00:16:01.963 "data_size": 63488 00:16:01.963 }, 00:16:01.963 { 00:16:01.963 "name": "pt3", 00:16:01.963 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:01.963 "is_configured": true, 00:16:01.963 "data_offset": 2048, 00:16:01.963 "data_size": 63488 
00:16:01.963 }, 00:16:01.963 { 00:16:01.963 "name": "pt4", 00:16:01.963 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:01.963 "is_configured": true, 00:16:01.963 "data_offset": 2048, 00:16:01.963 "data_size": 63488 00:16:01.963 } 00:16:01.963 ] 00:16:01.963 }' 00:16:01.963 13:25:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.963 13:25:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.233 13:25:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:02.233 13:25:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:02.233 13:25:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.233 13:25:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.233 13:25:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.511 13:25:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:02.511 13:25:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:02.511 13:25:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:02.511 13:25:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.511 13:25:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.511 [2024-11-17 13:25:51.483786] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:02.511 13:25:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.511 13:25:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 5da421b1-b47f-4b3b-aa17-18355587bcb1 '!=' 5da421b1-b47f-4b3b-aa17-18355587bcb1 ']' 00:16:02.511 13:25:51 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 83971 00:16:02.511 13:25:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 83971 ']' 00:16:02.511 13:25:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 83971 00:16:02.511 13:25:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:16:02.511 13:25:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:02.511 13:25:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83971 00:16:02.511 13:25:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:02.511 13:25:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:02.511 killing process with pid 83971 00:16:02.511 13:25:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83971' 00:16:02.511 13:25:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 83971 00:16:02.511 [2024-11-17 13:25:51.553065] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:02.511 [2024-11-17 13:25:51.553145] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:02.511 [2024-11-17 13:25:51.553235] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:02.511 [2024-11-17 13:25:51.553248] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:16:02.511 13:25:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 83971 00:16:02.775 [2024-11-17 13:25:51.927778] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:04.154 13:25:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:16:04.154 
00:16:04.154 real 0m8.297s 00:16:04.154 user 0m13.014s 00:16:04.154 sys 0m1.546s 00:16:04.154 13:25:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:04.154 13:25:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.154 ************************************ 00:16:04.154 END TEST raid5f_superblock_test 00:16:04.154 ************************************ 00:16:04.154 13:25:53 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:16:04.154 13:25:53 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:16:04.154 13:25:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:04.154 13:25:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:04.154 13:25:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:04.154 ************************************ 00:16:04.154 START TEST raid5f_rebuild_test 00:16:04.154 ************************************ 00:16:04.154 13:25:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:16:04.154 13:25:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:04.154 13:25:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:04.154 13:25:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:04.154 13:25:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:04.154 13:25:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:04.154 13:25:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:04.154 13:25:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:04.154 13:25:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:16:04.154 13:25:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:04.154 13:25:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:04.154 13:25:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:04.154 13:25:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:04.154 13:25:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:04.154 13:25:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:04.154 13:25:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:04.154 13:25:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:04.154 13:25:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:04.154 13:25:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:04.154 13:25:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:04.154 13:25:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:04.154 13:25:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:04.154 13:25:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:04.154 13:25:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:04.154 13:25:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:04.154 13:25:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:04.154 13:25:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:04.154 13:25:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:04.154 13:25:53 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:04.154 13:25:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:04.154 13:25:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:04.154 13:25:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:04.154 13:25:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=84454 00:16:04.154 13:25:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:04.154 13:25:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 84454 00:16:04.154 13:25:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 84454 ']' 00:16:04.154 13:25:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:04.154 13:25:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:04.154 13:25:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:04.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:04.154 13:25:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:04.154 13:25:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.154 [2024-11-17 13:25:53.145114] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:16:04.154 [2024-11-17 13:25:53.145277] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84454 ] 00:16:04.154 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:04.154 Zero copy mechanism will not be used. 00:16:04.155 [2024-11-17 13:25:53.317090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:04.414 [2024-11-17 13:25:53.424506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:04.414 [2024-11-17 13:25:53.612155] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:04.414 [2024-11-17 13:25:53.612279] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:04.985 13:25:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:04.985 13:25:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:16:04.985 13:25:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:04.985 13:25:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:04.985 13:25:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.985 13:25:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.985 BaseBdev1_malloc 00:16:04.985 13:25:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.985 13:25:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:04.985 13:25:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.985 13:25:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 
-- # set +x 00:16:04.985 [2024-11-17 13:25:54.015037] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:04.985 [2024-11-17 13:25:54.015188] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:04.985 [2024-11-17 13:25:54.015251] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:04.985 [2024-11-17 13:25:54.015284] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:04.985 [2024-11-17 13:25:54.017252] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:04.985 [2024-11-17 13:25:54.017320] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:04.985 BaseBdev1 00:16:04.985 13:25:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.985 13:25:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:04.985 13:25:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:04.985 13:25:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.985 13:25:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.985 BaseBdev2_malloc 00:16:04.985 13:25:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.985 13:25:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:04.985 13:25:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.985 13:25:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.985 [2024-11-17 13:25:54.069444] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:04.985 [2024-11-17 13:25:54.069507] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:04.985 [2024-11-17 13:25:54.069525] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:04.985 [2024-11-17 13:25:54.069535] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:04.985 [2024-11-17 13:25:54.071534] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:04.985 [2024-11-17 13:25:54.071637] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:04.985 BaseBdev2 00:16:04.985 13:25:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.985 13:25:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:04.985 13:25:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:04.985 13:25:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.985 13:25:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.985 BaseBdev3_malloc 00:16:04.985 13:25:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.985 13:25:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:04.985 13:25:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.985 13:25:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.985 [2024-11-17 13:25:54.132413] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:04.985 [2024-11-17 13:25:54.132467] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:04.985 [2024-11-17 13:25:54.132487] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:04.985 
[2024-11-17 13:25:54.132497] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:04.985 [2024-11-17 13:25:54.134452] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:04.985 [2024-11-17 13:25:54.134491] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:04.985 BaseBdev3 00:16:04.985 13:25:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.985 13:25:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:04.985 13:25:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:04.985 13:25:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.985 13:25:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.985 BaseBdev4_malloc 00:16:04.985 13:25:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.985 13:25:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:04.985 13:25:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.985 13:25:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.985 [2024-11-17 13:25:54.185836] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:04.985 [2024-11-17 13:25:54.185892] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:04.985 [2024-11-17 13:25:54.185909] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:04.985 [2024-11-17 13:25:54.185919] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:04.985 [2024-11-17 13:25:54.187949] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:16:04.985 [2024-11-17 13:25:54.187988] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:04.985 BaseBdev4 00:16:04.985 13:25:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.985 13:25:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:04.986 13:25:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.986 13:25:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.246 spare_malloc 00:16:05.246 13:25:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.246 13:25:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:05.246 13:25:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.246 13:25:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.246 spare_delay 00:16:05.246 13:25:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.246 13:25:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:05.246 13:25:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.246 13:25:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.246 [2024-11-17 13:25:54.253373] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:05.246 [2024-11-17 13:25:54.253429] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:05.246 [2024-11-17 13:25:54.253447] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:05.246 [2024-11-17 13:25:54.253458] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:05.246 [2024-11-17 13:25:54.255509] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:05.246 [2024-11-17 13:25:54.255547] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:05.246 spare 00:16:05.246 13:25:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.246 13:25:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:05.246 13:25:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.246 13:25:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.246 [2024-11-17 13:25:54.265404] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:05.246 [2024-11-17 13:25:54.267149] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:05.246 [2024-11-17 13:25:54.267222] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:05.246 [2024-11-17 13:25:54.267272] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:05.246 [2024-11-17 13:25:54.267351] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:05.246 [2024-11-17 13:25:54.267368] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:05.246 [2024-11-17 13:25:54.267612] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:05.246 [2024-11-17 13:25:54.274693] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:05.246 [2024-11-17 13:25:54.274715] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:05.246 [2024-11-17 
13:25:54.274907] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:05.246 13:25:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.247 13:25:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:05.247 13:25:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:05.247 13:25:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:05.247 13:25:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:05.247 13:25:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:05.247 13:25:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:05.247 13:25:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.247 13:25:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.247 13:25:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.247 13:25:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.247 13:25:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.247 13:25:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.247 13:25:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.247 13:25:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.247 13:25:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.247 13:25:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.247 "name": "raid_bdev1", 00:16:05.247 "uuid": 
"2718d865-9313-45f6-b296-1f7f1b0cfead", 00:16:05.247 "strip_size_kb": 64, 00:16:05.247 "state": "online", 00:16:05.247 "raid_level": "raid5f", 00:16:05.247 "superblock": false, 00:16:05.247 "num_base_bdevs": 4, 00:16:05.247 "num_base_bdevs_discovered": 4, 00:16:05.247 "num_base_bdevs_operational": 4, 00:16:05.247 "base_bdevs_list": [ 00:16:05.247 { 00:16:05.247 "name": "BaseBdev1", 00:16:05.247 "uuid": "3b10713d-bc5e-5bb1-aefe-ed9836d3d23f", 00:16:05.247 "is_configured": true, 00:16:05.247 "data_offset": 0, 00:16:05.247 "data_size": 65536 00:16:05.247 }, 00:16:05.247 { 00:16:05.247 "name": "BaseBdev2", 00:16:05.247 "uuid": "5e8bbed0-bbc2-5319-ab6b-d022dba43e20", 00:16:05.247 "is_configured": true, 00:16:05.247 "data_offset": 0, 00:16:05.247 "data_size": 65536 00:16:05.247 }, 00:16:05.247 { 00:16:05.247 "name": "BaseBdev3", 00:16:05.247 "uuid": "6b09d438-d88f-5de2-b9b0-627bd52980e7", 00:16:05.247 "is_configured": true, 00:16:05.247 "data_offset": 0, 00:16:05.247 "data_size": 65536 00:16:05.247 }, 00:16:05.247 { 00:16:05.247 "name": "BaseBdev4", 00:16:05.247 "uuid": "73305a37-7c11-5078-af11-3824659315e6", 00:16:05.247 "is_configured": true, 00:16:05.247 "data_offset": 0, 00:16:05.247 "data_size": 65536 00:16:05.247 } 00:16:05.247 ] 00:16:05.247 }' 00:16:05.247 13:25:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.247 13:25:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.507 13:25:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:05.507 13:25:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:05.507 13:25:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.507 13:25:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.507 [2024-11-17 13:25:54.722454] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:16:05.767 13:25:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.767 13:25:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:16:05.767 13:25:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:05.767 13:25:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.767 13:25:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.767 13:25:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.767 13:25:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.767 13:25:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:05.767 13:25:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:05.767 13:25:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:05.767 13:25:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:05.767 13:25:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:05.767 13:25:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:05.767 13:25:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:05.767 13:25:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:05.767 13:25:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:05.767 13:25:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:05.767 13:25:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:05.767 13:25:54 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:05.767 13:25:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:05.767 13:25:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:05.767 [2024-11-17 13:25:54.977802] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:06.026 /dev/nbd0 00:16:06.026 13:25:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:06.026 13:25:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:06.026 13:25:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:06.026 13:25:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:06.026 13:25:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:06.026 13:25:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:06.027 13:25:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:06.027 13:25:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:06.027 13:25:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:06.027 13:25:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:06.027 13:25:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:06.027 1+0 records in 00:16:06.027 1+0 records out 00:16:06.027 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000410262 s, 10.0 MB/s 00:16:06.027 13:25:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:06.027 13:25:55 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:06.027 13:25:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:06.027 13:25:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:06.027 13:25:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:06.027 13:25:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:06.027 13:25:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:06.027 13:25:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:06.027 13:25:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:16:06.027 13:25:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:16:06.027 13:25:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:16:06.596 512+0 records in 00:16:06.596 512+0 records out 00:16:06.596 100663296 bytes (101 MB, 96 MiB) copied, 0.48495 s, 208 MB/s 00:16:06.596 13:25:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:06.596 13:25:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:06.596 13:25:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:06.596 13:25:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:06.596 13:25:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:06.596 13:25:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:06.596 13:25:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_stop_disk /dev/nbd0 00:16:06.596 13:25:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:06.596 [2024-11-17 13:25:55.771366] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:06.596 13:25:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:06.596 13:25:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:06.597 13:25:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:06.597 13:25:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:06.597 13:25:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:06.597 13:25:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:06.597 13:25:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:06.597 13:25:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:06.597 13:25:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.597 13:25:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.597 [2024-11-17 13:25:55.784810] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:06.597 13:25:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.597 13:25:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:06.597 13:25:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:06.597 13:25:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:06.597 13:25:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:06.597 13:25:55 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:06.597 13:25:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:06.597 13:25:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.597 13:25:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.597 13:25:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.597 13:25:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.597 13:25:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.597 13:25:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.597 13:25:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.597 13:25:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.597 13:25:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.856 13:25:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.856 "name": "raid_bdev1", 00:16:06.856 "uuid": "2718d865-9313-45f6-b296-1f7f1b0cfead", 00:16:06.856 "strip_size_kb": 64, 00:16:06.856 "state": "online", 00:16:06.856 "raid_level": "raid5f", 00:16:06.856 "superblock": false, 00:16:06.856 "num_base_bdevs": 4, 00:16:06.856 "num_base_bdevs_discovered": 3, 00:16:06.856 "num_base_bdevs_operational": 3, 00:16:06.856 "base_bdevs_list": [ 00:16:06.856 { 00:16:06.856 "name": null, 00:16:06.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.856 "is_configured": false, 00:16:06.856 "data_offset": 0, 00:16:06.856 "data_size": 65536 00:16:06.856 }, 00:16:06.856 { 00:16:06.857 "name": "BaseBdev2", 00:16:06.857 "uuid": "5e8bbed0-bbc2-5319-ab6b-d022dba43e20", 00:16:06.857 "is_configured": true, 00:16:06.857 
"data_offset": 0, 00:16:06.857 "data_size": 65536 00:16:06.857 }, 00:16:06.857 { 00:16:06.857 "name": "BaseBdev3", 00:16:06.857 "uuid": "6b09d438-d88f-5de2-b9b0-627bd52980e7", 00:16:06.857 "is_configured": true, 00:16:06.857 "data_offset": 0, 00:16:06.857 "data_size": 65536 00:16:06.857 }, 00:16:06.857 { 00:16:06.857 "name": "BaseBdev4", 00:16:06.857 "uuid": "73305a37-7c11-5078-af11-3824659315e6", 00:16:06.857 "is_configured": true, 00:16:06.857 "data_offset": 0, 00:16:06.857 "data_size": 65536 00:16:06.857 } 00:16:06.857 ] 00:16:06.857 }' 00:16:06.857 13:25:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.857 13:25:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.116 13:25:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:07.116 13:25:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.116 13:25:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.116 [2024-11-17 13:25:56.240006] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:07.116 [2024-11-17 13:25:56.254972] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:16:07.116 13:25:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.116 13:25:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:07.116 [2024-11-17 13:25:56.264241] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:08.056 13:25:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:08.056 13:25:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:08.056 13:25:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:16:08.056 13:25:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:08.056 13:25:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:08.056 13:25:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.056 13:25:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.056 13:25:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.056 13:25:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.317 13:25:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.317 13:25:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:08.317 "name": "raid_bdev1", 00:16:08.317 "uuid": "2718d865-9313-45f6-b296-1f7f1b0cfead", 00:16:08.317 "strip_size_kb": 64, 00:16:08.317 "state": "online", 00:16:08.317 "raid_level": "raid5f", 00:16:08.317 "superblock": false, 00:16:08.317 "num_base_bdevs": 4, 00:16:08.317 "num_base_bdevs_discovered": 4, 00:16:08.317 "num_base_bdevs_operational": 4, 00:16:08.317 "process": { 00:16:08.317 "type": "rebuild", 00:16:08.317 "target": "spare", 00:16:08.317 "progress": { 00:16:08.317 "blocks": 19200, 00:16:08.317 "percent": 9 00:16:08.317 } 00:16:08.317 }, 00:16:08.317 "base_bdevs_list": [ 00:16:08.317 { 00:16:08.317 "name": "spare", 00:16:08.317 "uuid": "c9767d78-e024-52e6-a9f4-c90d41be3586", 00:16:08.317 "is_configured": true, 00:16:08.317 "data_offset": 0, 00:16:08.317 "data_size": 65536 00:16:08.317 }, 00:16:08.317 { 00:16:08.317 "name": "BaseBdev2", 00:16:08.317 "uuid": "5e8bbed0-bbc2-5319-ab6b-d022dba43e20", 00:16:08.317 "is_configured": true, 00:16:08.317 "data_offset": 0, 00:16:08.317 "data_size": 65536 00:16:08.317 }, 00:16:08.317 { 00:16:08.317 "name": "BaseBdev3", 00:16:08.317 "uuid": 
"6b09d438-d88f-5de2-b9b0-627bd52980e7", 00:16:08.317 "is_configured": true, 00:16:08.317 "data_offset": 0, 00:16:08.317 "data_size": 65536 00:16:08.317 }, 00:16:08.317 { 00:16:08.317 "name": "BaseBdev4", 00:16:08.317 "uuid": "73305a37-7c11-5078-af11-3824659315e6", 00:16:08.317 "is_configured": true, 00:16:08.317 "data_offset": 0, 00:16:08.317 "data_size": 65536 00:16:08.317 } 00:16:08.317 ] 00:16:08.317 }' 00:16:08.317 13:25:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:08.317 13:25:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:08.317 13:25:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:08.317 13:25:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:08.317 13:25:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:08.317 13:25:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.317 13:25:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.317 [2024-11-17 13:25:57.414825] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:08.317 [2024-11-17 13:25:57.470158] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:08.317 [2024-11-17 13:25:57.470261] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:08.317 [2024-11-17 13:25:57.470279] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:08.317 [2024-11-17 13:25:57.470289] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:08.317 13:25:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.317 13:25:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:08.317 13:25:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:08.317 13:25:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:08.317 13:25:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:08.317 13:25:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:08.317 13:25:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:08.317 13:25:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:08.317 13:25:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:08.317 13:25:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:08.317 13:25:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.317 13:25:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.317 13:25:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.317 13:25:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.317 13:25:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.317 13:25:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.577 13:25:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:08.577 "name": "raid_bdev1", 00:16:08.577 "uuid": "2718d865-9313-45f6-b296-1f7f1b0cfead", 00:16:08.577 "strip_size_kb": 64, 00:16:08.577 "state": "online", 00:16:08.577 "raid_level": "raid5f", 00:16:08.577 "superblock": false, 00:16:08.577 "num_base_bdevs": 4, 00:16:08.577 "num_base_bdevs_discovered": 3, 00:16:08.577 
"num_base_bdevs_operational": 3, 00:16:08.577 "base_bdevs_list": [ 00:16:08.577 { 00:16:08.577 "name": null, 00:16:08.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.577 "is_configured": false, 00:16:08.577 "data_offset": 0, 00:16:08.577 "data_size": 65536 00:16:08.577 }, 00:16:08.577 { 00:16:08.577 "name": "BaseBdev2", 00:16:08.577 "uuid": "5e8bbed0-bbc2-5319-ab6b-d022dba43e20", 00:16:08.577 "is_configured": true, 00:16:08.577 "data_offset": 0, 00:16:08.577 "data_size": 65536 00:16:08.577 }, 00:16:08.577 { 00:16:08.577 "name": "BaseBdev3", 00:16:08.577 "uuid": "6b09d438-d88f-5de2-b9b0-627bd52980e7", 00:16:08.577 "is_configured": true, 00:16:08.577 "data_offset": 0, 00:16:08.577 "data_size": 65536 00:16:08.577 }, 00:16:08.577 { 00:16:08.577 "name": "BaseBdev4", 00:16:08.577 "uuid": "73305a37-7c11-5078-af11-3824659315e6", 00:16:08.577 "is_configured": true, 00:16:08.577 "data_offset": 0, 00:16:08.577 "data_size": 65536 00:16:08.577 } 00:16:08.577 ] 00:16:08.577 }' 00:16:08.577 13:25:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:08.577 13:25:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.837 13:25:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:08.837 13:25:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:08.837 13:25:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:08.837 13:25:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:08.837 13:25:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:08.837 13:25:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.837 13:25:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.837 13:25:57 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.837 13:25:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.837 13:25:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.837 13:25:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:08.837 "name": "raid_bdev1", 00:16:08.837 "uuid": "2718d865-9313-45f6-b296-1f7f1b0cfead", 00:16:08.837 "strip_size_kb": 64, 00:16:08.837 "state": "online", 00:16:08.837 "raid_level": "raid5f", 00:16:08.837 "superblock": false, 00:16:08.837 "num_base_bdevs": 4, 00:16:08.837 "num_base_bdevs_discovered": 3, 00:16:08.837 "num_base_bdevs_operational": 3, 00:16:08.837 "base_bdevs_list": [ 00:16:08.837 { 00:16:08.837 "name": null, 00:16:08.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.837 "is_configured": false, 00:16:08.837 "data_offset": 0, 00:16:08.837 "data_size": 65536 00:16:08.837 }, 00:16:08.837 { 00:16:08.837 "name": "BaseBdev2", 00:16:08.837 "uuid": "5e8bbed0-bbc2-5319-ab6b-d022dba43e20", 00:16:08.837 "is_configured": true, 00:16:08.837 "data_offset": 0, 00:16:08.837 "data_size": 65536 00:16:08.837 }, 00:16:08.837 { 00:16:08.837 "name": "BaseBdev3", 00:16:08.837 "uuid": "6b09d438-d88f-5de2-b9b0-627bd52980e7", 00:16:08.837 "is_configured": true, 00:16:08.837 "data_offset": 0, 00:16:08.837 "data_size": 65536 00:16:08.837 }, 00:16:08.837 { 00:16:08.837 "name": "BaseBdev4", 00:16:08.837 "uuid": "73305a37-7c11-5078-af11-3824659315e6", 00:16:08.837 "is_configured": true, 00:16:08.837 "data_offset": 0, 00:16:08.837 "data_size": 65536 00:16:08.837 } 00:16:08.837 ] 00:16:08.837 }' 00:16:08.837 13:25:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:08.837 13:25:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:08.837 13:25:58 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:08.837 13:25:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:08.837 13:25:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:08.837 13:25:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.837 13:25:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.837 [2024-11-17 13:25:58.059033] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:09.097 [2024-11-17 13:25:58.074990] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:16:09.097 13:25:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.097 13:25:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:09.097 [2024-11-17 13:25:58.084457] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:10.037 13:25:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:10.037 13:25:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:10.037 13:25:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:10.037 13:25:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:10.037 13:25:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:10.037 13:25:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.037 13:25:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.037 13:25:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.037 
13:25:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.037 13:25:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.037 13:25:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:10.037 "name": "raid_bdev1", 00:16:10.037 "uuid": "2718d865-9313-45f6-b296-1f7f1b0cfead", 00:16:10.037 "strip_size_kb": 64, 00:16:10.037 "state": "online", 00:16:10.037 "raid_level": "raid5f", 00:16:10.037 "superblock": false, 00:16:10.037 "num_base_bdevs": 4, 00:16:10.037 "num_base_bdevs_discovered": 4, 00:16:10.037 "num_base_bdevs_operational": 4, 00:16:10.037 "process": { 00:16:10.037 "type": "rebuild", 00:16:10.037 "target": "spare", 00:16:10.037 "progress": { 00:16:10.037 "blocks": 19200, 00:16:10.037 "percent": 9 00:16:10.037 } 00:16:10.037 }, 00:16:10.037 "base_bdevs_list": [ 00:16:10.037 { 00:16:10.037 "name": "spare", 00:16:10.037 "uuid": "c9767d78-e024-52e6-a9f4-c90d41be3586", 00:16:10.037 "is_configured": true, 00:16:10.037 "data_offset": 0, 00:16:10.037 "data_size": 65536 00:16:10.037 }, 00:16:10.037 { 00:16:10.037 "name": "BaseBdev2", 00:16:10.037 "uuid": "5e8bbed0-bbc2-5319-ab6b-d022dba43e20", 00:16:10.037 "is_configured": true, 00:16:10.037 "data_offset": 0, 00:16:10.037 "data_size": 65536 00:16:10.037 }, 00:16:10.037 { 00:16:10.037 "name": "BaseBdev3", 00:16:10.037 "uuid": "6b09d438-d88f-5de2-b9b0-627bd52980e7", 00:16:10.037 "is_configured": true, 00:16:10.037 "data_offset": 0, 00:16:10.037 "data_size": 65536 00:16:10.037 }, 00:16:10.037 { 00:16:10.037 "name": "BaseBdev4", 00:16:10.037 "uuid": "73305a37-7c11-5078-af11-3824659315e6", 00:16:10.037 "is_configured": true, 00:16:10.038 "data_offset": 0, 00:16:10.038 "data_size": 65536 00:16:10.038 } 00:16:10.038 ] 00:16:10.038 }' 00:16:10.038 13:25:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:10.038 13:25:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 
-- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:10.038 13:25:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:10.038 13:25:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:10.038 13:25:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:10.038 13:25:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:10.038 13:25:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:10.038 13:25:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=609 00:16:10.038 13:25:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:10.038 13:25:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:10.038 13:25:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:10.038 13:25:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:10.038 13:25:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:10.038 13:25:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:10.038 13:25:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.038 13:25:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.038 13:25:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.038 13:25:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.038 13:25:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.298 13:25:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:16:10.298 "name": "raid_bdev1", 00:16:10.298 "uuid": "2718d865-9313-45f6-b296-1f7f1b0cfead", 00:16:10.298 "strip_size_kb": 64, 00:16:10.298 "state": "online", 00:16:10.298 "raid_level": "raid5f", 00:16:10.298 "superblock": false, 00:16:10.298 "num_base_bdevs": 4, 00:16:10.298 "num_base_bdevs_discovered": 4, 00:16:10.298 "num_base_bdevs_operational": 4, 00:16:10.298 "process": { 00:16:10.298 "type": "rebuild", 00:16:10.298 "target": "spare", 00:16:10.298 "progress": { 00:16:10.298 "blocks": 21120, 00:16:10.298 "percent": 10 00:16:10.298 } 00:16:10.298 }, 00:16:10.298 "base_bdevs_list": [ 00:16:10.299 { 00:16:10.299 "name": "spare", 00:16:10.299 "uuid": "c9767d78-e024-52e6-a9f4-c90d41be3586", 00:16:10.299 "is_configured": true, 00:16:10.299 "data_offset": 0, 00:16:10.299 "data_size": 65536 00:16:10.299 }, 00:16:10.299 { 00:16:10.299 "name": "BaseBdev2", 00:16:10.299 "uuid": "5e8bbed0-bbc2-5319-ab6b-d022dba43e20", 00:16:10.299 "is_configured": true, 00:16:10.299 "data_offset": 0, 00:16:10.299 "data_size": 65536 00:16:10.299 }, 00:16:10.299 { 00:16:10.299 "name": "BaseBdev3", 00:16:10.299 "uuid": "6b09d438-d88f-5de2-b9b0-627bd52980e7", 00:16:10.299 "is_configured": true, 00:16:10.299 "data_offset": 0, 00:16:10.299 "data_size": 65536 00:16:10.299 }, 00:16:10.299 { 00:16:10.299 "name": "BaseBdev4", 00:16:10.299 "uuid": "73305a37-7c11-5078-af11-3824659315e6", 00:16:10.299 "is_configured": true, 00:16:10.299 "data_offset": 0, 00:16:10.299 "data_size": 65536 00:16:10.299 } 00:16:10.299 ] 00:16:10.299 }' 00:16:10.299 13:25:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:10.299 13:25:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:10.299 13:25:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:10.299 13:25:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:10.299 13:25:59 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:11.236 13:26:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:11.236 13:26:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:11.236 13:26:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:11.236 13:26:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:11.236 13:26:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:11.236 13:26:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:11.236 13:26:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.236 13:26:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.236 13:26:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.236 13:26:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.236 13:26:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.236 13:26:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:11.236 "name": "raid_bdev1", 00:16:11.236 "uuid": "2718d865-9313-45f6-b296-1f7f1b0cfead", 00:16:11.236 "strip_size_kb": 64, 00:16:11.236 "state": "online", 00:16:11.236 "raid_level": "raid5f", 00:16:11.236 "superblock": false, 00:16:11.236 "num_base_bdevs": 4, 00:16:11.236 "num_base_bdevs_discovered": 4, 00:16:11.236 "num_base_bdevs_operational": 4, 00:16:11.236 "process": { 00:16:11.236 "type": "rebuild", 00:16:11.236 "target": "spare", 00:16:11.236 "progress": { 00:16:11.236 "blocks": 42240, 00:16:11.236 "percent": 21 00:16:11.236 } 00:16:11.236 }, 00:16:11.236 "base_bdevs_list": [ 00:16:11.236 { 
00:16:11.236 "name": "spare", 00:16:11.236 "uuid": "c9767d78-e024-52e6-a9f4-c90d41be3586", 00:16:11.236 "is_configured": true, 00:16:11.236 "data_offset": 0, 00:16:11.236 "data_size": 65536 00:16:11.236 }, 00:16:11.236 { 00:16:11.236 "name": "BaseBdev2", 00:16:11.236 "uuid": "5e8bbed0-bbc2-5319-ab6b-d022dba43e20", 00:16:11.236 "is_configured": true, 00:16:11.236 "data_offset": 0, 00:16:11.236 "data_size": 65536 00:16:11.236 }, 00:16:11.236 { 00:16:11.236 "name": "BaseBdev3", 00:16:11.236 "uuid": "6b09d438-d88f-5de2-b9b0-627bd52980e7", 00:16:11.236 "is_configured": true, 00:16:11.236 "data_offset": 0, 00:16:11.236 "data_size": 65536 00:16:11.236 }, 00:16:11.236 { 00:16:11.236 "name": "BaseBdev4", 00:16:11.236 "uuid": "73305a37-7c11-5078-af11-3824659315e6", 00:16:11.236 "is_configured": true, 00:16:11.236 "data_offset": 0, 00:16:11.236 "data_size": 65536 00:16:11.236 } 00:16:11.236 ] 00:16:11.236 }' 00:16:11.236 13:26:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:11.236 13:26:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:11.236 13:26:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:11.495 13:26:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:11.495 13:26:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:12.430 13:26:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:12.430 13:26:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:12.430 13:26:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:12.430 13:26:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:12.430 13:26:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:16:12.430 13:26:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:12.430 13:26:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.430 13:26:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.430 13:26:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.430 13:26:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.430 13:26:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.430 13:26:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:12.430 "name": "raid_bdev1", 00:16:12.430 "uuid": "2718d865-9313-45f6-b296-1f7f1b0cfead", 00:16:12.430 "strip_size_kb": 64, 00:16:12.430 "state": "online", 00:16:12.430 "raid_level": "raid5f", 00:16:12.430 "superblock": false, 00:16:12.430 "num_base_bdevs": 4, 00:16:12.430 "num_base_bdevs_discovered": 4, 00:16:12.430 "num_base_bdevs_operational": 4, 00:16:12.430 "process": { 00:16:12.430 "type": "rebuild", 00:16:12.430 "target": "spare", 00:16:12.430 "progress": { 00:16:12.430 "blocks": 63360, 00:16:12.430 "percent": 32 00:16:12.430 } 00:16:12.430 }, 00:16:12.430 "base_bdevs_list": [ 00:16:12.430 { 00:16:12.430 "name": "spare", 00:16:12.430 "uuid": "c9767d78-e024-52e6-a9f4-c90d41be3586", 00:16:12.430 "is_configured": true, 00:16:12.430 "data_offset": 0, 00:16:12.430 "data_size": 65536 00:16:12.430 }, 00:16:12.430 { 00:16:12.430 "name": "BaseBdev2", 00:16:12.430 "uuid": "5e8bbed0-bbc2-5319-ab6b-d022dba43e20", 00:16:12.430 "is_configured": true, 00:16:12.430 "data_offset": 0, 00:16:12.430 "data_size": 65536 00:16:12.430 }, 00:16:12.430 { 00:16:12.430 "name": "BaseBdev3", 00:16:12.430 "uuid": "6b09d438-d88f-5de2-b9b0-627bd52980e7", 00:16:12.430 "is_configured": true, 00:16:12.430 "data_offset": 0, 00:16:12.430 
"data_size": 65536 00:16:12.430 }, 00:16:12.430 { 00:16:12.430 "name": "BaseBdev4", 00:16:12.430 "uuid": "73305a37-7c11-5078-af11-3824659315e6", 00:16:12.430 "is_configured": true, 00:16:12.430 "data_offset": 0, 00:16:12.430 "data_size": 65536 00:16:12.430 } 00:16:12.430 ] 00:16:12.431 }' 00:16:12.431 13:26:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:12.431 13:26:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:12.431 13:26:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:12.431 13:26:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:12.431 13:26:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:13.808 13:26:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:13.808 13:26:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:13.808 13:26:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:13.808 13:26:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:13.808 13:26:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:13.808 13:26:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:13.808 13:26:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.808 13:26:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.808 13:26:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.808 13:26:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.808 13:26:02 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.808 13:26:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:13.808 "name": "raid_bdev1", 00:16:13.808 "uuid": "2718d865-9313-45f6-b296-1f7f1b0cfead", 00:16:13.808 "strip_size_kb": 64, 00:16:13.808 "state": "online", 00:16:13.808 "raid_level": "raid5f", 00:16:13.808 "superblock": false, 00:16:13.808 "num_base_bdevs": 4, 00:16:13.808 "num_base_bdevs_discovered": 4, 00:16:13.808 "num_base_bdevs_operational": 4, 00:16:13.808 "process": { 00:16:13.808 "type": "rebuild", 00:16:13.808 "target": "spare", 00:16:13.808 "progress": { 00:16:13.808 "blocks": 86400, 00:16:13.808 "percent": 43 00:16:13.808 } 00:16:13.808 }, 00:16:13.808 "base_bdevs_list": [ 00:16:13.808 { 00:16:13.808 "name": "spare", 00:16:13.808 "uuid": "c9767d78-e024-52e6-a9f4-c90d41be3586", 00:16:13.808 "is_configured": true, 00:16:13.808 "data_offset": 0, 00:16:13.808 "data_size": 65536 00:16:13.808 }, 00:16:13.808 { 00:16:13.808 "name": "BaseBdev2", 00:16:13.808 "uuid": "5e8bbed0-bbc2-5319-ab6b-d022dba43e20", 00:16:13.808 "is_configured": true, 00:16:13.808 "data_offset": 0, 00:16:13.808 "data_size": 65536 00:16:13.808 }, 00:16:13.808 { 00:16:13.808 "name": "BaseBdev3", 00:16:13.808 "uuid": "6b09d438-d88f-5de2-b9b0-627bd52980e7", 00:16:13.808 "is_configured": true, 00:16:13.808 "data_offset": 0, 00:16:13.808 "data_size": 65536 00:16:13.808 }, 00:16:13.808 { 00:16:13.808 "name": "BaseBdev4", 00:16:13.808 "uuid": "73305a37-7c11-5078-af11-3824659315e6", 00:16:13.808 "is_configured": true, 00:16:13.808 "data_offset": 0, 00:16:13.808 "data_size": 65536 00:16:13.808 } 00:16:13.808 ] 00:16:13.808 }' 00:16:13.808 13:26:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:13.808 13:26:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:13.808 13:26:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:16:13.808 13:26:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:13.808 13:26:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:14.748 13:26:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:14.748 13:26:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:14.748 13:26:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:14.748 13:26:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:14.748 13:26:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:14.748 13:26:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:14.748 13:26:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.748 13:26:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.748 13:26:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.748 13:26:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.748 13:26:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.748 13:26:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:14.748 "name": "raid_bdev1", 00:16:14.748 "uuid": "2718d865-9313-45f6-b296-1f7f1b0cfead", 00:16:14.748 "strip_size_kb": 64, 00:16:14.748 "state": "online", 00:16:14.748 "raid_level": "raid5f", 00:16:14.748 "superblock": false, 00:16:14.748 "num_base_bdevs": 4, 00:16:14.748 "num_base_bdevs_discovered": 4, 00:16:14.748 "num_base_bdevs_operational": 4, 00:16:14.748 "process": { 00:16:14.748 "type": "rebuild", 00:16:14.748 "target": "spare", 00:16:14.748 
"progress": { 00:16:14.748 "blocks": 107520, 00:16:14.748 "percent": 54 00:16:14.748 } 00:16:14.748 }, 00:16:14.748 "base_bdevs_list": [ 00:16:14.748 { 00:16:14.748 "name": "spare", 00:16:14.748 "uuid": "c9767d78-e024-52e6-a9f4-c90d41be3586", 00:16:14.748 "is_configured": true, 00:16:14.748 "data_offset": 0, 00:16:14.748 "data_size": 65536 00:16:14.748 }, 00:16:14.748 { 00:16:14.748 "name": "BaseBdev2", 00:16:14.748 "uuid": "5e8bbed0-bbc2-5319-ab6b-d022dba43e20", 00:16:14.748 "is_configured": true, 00:16:14.748 "data_offset": 0, 00:16:14.748 "data_size": 65536 00:16:14.748 }, 00:16:14.748 { 00:16:14.748 "name": "BaseBdev3", 00:16:14.748 "uuid": "6b09d438-d88f-5de2-b9b0-627bd52980e7", 00:16:14.748 "is_configured": true, 00:16:14.748 "data_offset": 0, 00:16:14.748 "data_size": 65536 00:16:14.748 }, 00:16:14.748 { 00:16:14.748 "name": "BaseBdev4", 00:16:14.748 "uuid": "73305a37-7c11-5078-af11-3824659315e6", 00:16:14.748 "is_configured": true, 00:16:14.748 "data_offset": 0, 00:16:14.748 "data_size": 65536 00:16:14.748 } 00:16:14.748 ] 00:16:14.748 }' 00:16:14.748 13:26:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:14.748 13:26:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:14.748 13:26:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:14.748 13:26:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:14.748 13:26:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:16.130 13:26:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:16.130 13:26:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:16.130 13:26:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:16.130 13:26:04 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:16.130 13:26:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:16.130 13:26:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:16.130 13:26:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.130 13:26:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.130 13:26:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.130 13:26:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.130 13:26:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.130 13:26:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:16.130 "name": "raid_bdev1", 00:16:16.130 "uuid": "2718d865-9313-45f6-b296-1f7f1b0cfead", 00:16:16.130 "strip_size_kb": 64, 00:16:16.130 "state": "online", 00:16:16.130 "raid_level": "raid5f", 00:16:16.130 "superblock": false, 00:16:16.130 "num_base_bdevs": 4, 00:16:16.130 "num_base_bdevs_discovered": 4, 00:16:16.130 "num_base_bdevs_operational": 4, 00:16:16.130 "process": { 00:16:16.130 "type": "rebuild", 00:16:16.130 "target": "spare", 00:16:16.130 "progress": { 00:16:16.130 "blocks": 130560, 00:16:16.130 "percent": 66 00:16:16.130 } 00:16:16.130 }, 00:16:16.130 "base_bdevs_list": [ 00:16:16.130 { 00:16:16.130 "name": "spare", 00:16:16.130 "uuid": "c9767d78-e024-52e6-a9f4-c90d41be3586", 00:16:16.130 "is_configured": true, 00:16:16.130 "data_offset": 0, 00:16:16.130 "data_size": 65536 00:16:16.130 }, 00:16:16.130 { 00:16:16.130 "name": "BaseBdev2", 00:16:16.130 "uuid": "5e8bbed0-bbc2-5319-ab6b-d022dba43e20", 00:16:16.130 "is_configured": true, 00:16:16.130 "data_offset": 0, 00:16:16.130 "data_size": 65536 00:16:16.130 }, 00:16:16.130 { 
00:16:16.130 "name": "BaseBdev3", 00:16:16.130 "uuid": "6b09d438-d88f-5de2-b9b0-627bd52980e7", 00:16:16.130 "is_configured": true, 00:16:16.131 "data_offset": 0, 00:16:16.131 "data_size": 65536 00:16:16.131 }, 00:16:16.131 { 00:16:16.131 "name": "BaseBdev4", 00:16:16.131 "uuid": "73305a37-7c11-5078-af11-3824659315e6", 00:16:16.131 "is_configured": true, 00:16:16.131 "data_offset": 0, 00:16:16.131 "data_size": 65536 00:16:16.131 } 00:16:16.131 ] 00:16:16.131 }' 00:16:16.131 13:26:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:16.131 13:26:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:16.131 13:26:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:16.131 13:26:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:16.131 13:26:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:17.070 13:26:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:17.070 13:26:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:17.070 13:26:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:17.070 13:26:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:17.070 13:26:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:17.070 13:26:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:17.070 13:26:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.070 13:26:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.070 13:26:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:16:17.070 13:26:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.070 13:26:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.070 13:26:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:17.070 "name": "raid_bdev1", 00:16:17.070 "uuid": "2718d865-9313-45f6-b296-1f7f1b0cfead", 00:16:17.070 "strip_size_kb": 64, 00:16:17.070 "state": "online", 00:16:17.070 "raid_level": "raid5f", 00:16:17.070 "superblock": false, 00:16:17.070 "num_base_bdevs": 4, 00:16:17.070 "num_base_bdevs_discovered": 4, 00:16:17.070 "num_base_bdevs_operational": 4, 00:16:17.070 "process": { 00:16:17.070 "type": "rebuild", 00:16:17.070 "target": "spare", 00:16:17.070 "progress": { 00:16:17.070 "blocks": 151680, 00:16:17.070 "percent": 77 00:16:17.070 } 00:16:17.070 }, 00:16:17.070 "base_bdevs_list": [ 00:16:17.070 { 00:16:17.070 "name": "spare", 00:16:17.070 "uuid": "c9767d78-e024-52e6-a9f4-c90d41be3586", 00:16:17.070 "is_configured": true, 00:16:17.070 "data_offset": 0, 00:16:17.070 "data_size": 65536 00:16:17.071 }, 00:16:17.071 { 00:16:17.071 "name": "BaseBdev2", 00:16:17.071 "uuid": "5e8bbed0-bbc2-5319-ab6b-d022dba43e20", 00:16:17.071 "is_configured": true, 00:16:17.071 "data_offset": 0, 00:16:17.071 "data_size": 65536 00:16:17.071 }, 00:16:17.071 { 00:16:17.071 "name": "BaseBdev3", 00:16:17.071 "uuid": "6b09d438-d88f-5de2-b9b0-627bd52980e7", 00:16:17.071 "is_configured": true, 00:16:17.071 "data_offset": 0, 00:16:17.071 "data_size": 65536 00:16:17.071 }, 00:16:17.071 { 00:16:17.071 "name": "BaseBdev4", 00:16:17.071 "uuid": "73305a37-7c11-5078-af11-3824659315e6", 00:16:17.071 "is_configured": true, 00:16:17.071 "data_offset": 0, 00:16:17.071 "data_size": 65536 00:16:17.071 } 00:16:17.071 ] 00:16:17.071 }' 00:16:17.071 13:26:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:17.071 13:26:06 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:17.071 13:26:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:17.071 13:26:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:17.071 13:26:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:18.010 13:26:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:18.010 13:26:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:18.010 13:26:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:18.010 13:26:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:18.010 13:26:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:18.010 13:26:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:18.271 13:26:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.271 13:26:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.271 13:26:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.271 13:26:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.271 13:26:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.271 13:26:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:18.271 "name": "raid_bdev1", 00:16:18.271 "uuid": "2718d865-9313-45f6-b296-1f7f1b0cfead", 00:16:18.271 "strip_size_kb": 64, 00:16:18.271 "state": "online", 00:16:18.271 "raid_level": "raid5f", 00:16:18.271 "superblock": false, 00:16:18.271 "num_base_bdevs": 4, 00:16:18.271 
"num_base_bdevs_discovered": 4, 00:16:18.271 "num_base_bdevs_operational": 4, 00:16:18.271 "process": { 00:16:18.271 "type": "rebuild", 00:16:18.271 "target": "spare", 00:16:18.271 "progress": { 00:16:18.271 "blocks": 174720, 00:16:18.271 "percent": 88 00:16:18.271 } 00:16:18.271 }, 00:16:18.271 "base_bdevs_list": [ 00:16:18.271 { 00:16:18.271 "name": "spare", 00:16:18.271 "uuid": "c9767d78-e024-52e6-a9f4-c90d41be3586", 00:16:18.271 "is_configured": true, 00:16:18.271 "data_offset": 0, 00:16:18.271 "data_size": 65536 00:16:18.271 }, 00:16:18.271 { 00:16:18.271 "name": "BaseBdev2", 00:16:18.271 "uuid": "5e8bbed0-bbc2-5319-ab6b-d022dba43e20", 00:16:18.271 "is_configured": true, 00:16:18.271 "data_offset": 0, 00:16:18.271 "data_size": 65536 00:16:18.271 }, 00:16:18.271 { 00:16:18.271 "name": "BaseBdev3", 00:16:18.271 "uuid": "6b09d438-d88f-5de2-b9b0-627bd52980e7", 00:16:18.271 "is_configured": true, 00:16:18.271 "data_offset": 0, 00:16:18.271 "data_size": 65536 00:16:18.271 }, 00:16:18.271 { 00:16:18.271 "name": "BaseBdev4", 00:16:18.271 "uuid": "73305a37-7c11-5078-af11-3824659315e6", 00:16:18.271 "is_configured": true, 00:16:18.271 "data_offset": 0, 00:16:18.271 "data_size": 65536 00:16:18.271 } 00:16:18.271 ] 00:16:18.271 }' 00:16:18.271 13:26:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:18.271 13:26:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:18.271 13:26:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:18.271 13:26:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:18.271 13:26:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:19.210 13:26:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:19.210 13:26:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:16:19.210 13:26:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:19.210 13:26:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:19.210 13:26:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:19.210 13:26:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:19.210 13:26:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.210 13:26:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.210 13:26:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.210 13:26:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.210 13:26:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.470 13:26:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:19.470 "name": "raid_bdev1", 00:16:19.470 "uuid": "2718d865-9313-45f6-b296-1f7f1b0cfead", 00:16:19.470 "strip_size_kb": 64, 00:16:19.470 "state": "online", 00:16:19.470 "raid_level": "raid5f", 00:16:19.470 "superblock": false, 00:16:19.470 "num_base_bdevs": 4, 00:16:19.470 "num_base_bdevs_discovered": 4, 00:16:19.470 "num_base_bdevs_operational": 4, 00:16:19.470 "process": { 00:16:19.470 "type": "rebuild", 00:16:19.470 "target": "spare", 00:16:19.470 "progress": { 00:16:19.470 "blocks": 195840, 00:16:19.470 "percent": 99 00:16:19.470 } 00:16:19.470 }, 00:16:19.470 "base_bdevs_list": [ 00:16:19.470 { 00:16:19.470 "name": "spare", 00:16:19.470 "uuid": "c9767d78-e024-52e6-a9f4-c90d41be3586", 00:16:19.470 "is_configured": true, 00:16:19.470 "data_offset": 0, 00:16:19.470 "data_size": 65536 00:16:19.470 }, 00:16:19.470 { 00:16:19.470 "name": "BaseBdev2", 00:16:19.470 "uuid": 
"5e8bbed0-bbc2-5319-ab6b-d022dba43e20", 00:16:19.470 "is_configured": true, 00:16:19.470 "data_offset": 0, 00:16:19.470 "data_size": 65536 00:16:19.470 }, 00:16:19.470 { 00:16:19.470 "name": "BaseBdev3", 00:16:19.470 "uuid": "6b09d438-d88f-5de2-b9b0-627bd52980e7", 00:16:19.470 "is_configured": true, 00:16:19.470 "data_offset": 0, 00:16:19.470 "data_size": 65536 00:16:19.470 }, 00:16:19.470 { 00:16:19.470 "name": "BaseBdev4", 00:16:19.470 "uuid": "73305a37-7c11-5078-af11-3824659315e6", 00:16:19.470 "is_configured": true, 00:16:19.470 "data_offset": 0, 00:16:19.470 "data_size": 65536 00:16:19.470 } 00:16:19.470 ] 00:16:19.470 }' 00:16:19.470 13:26:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:19.470 [2024-11-17 13:26:08.442463] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:19.470 [2024-11-17 13:26:08.442592] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:19.470 [2024-11-17 13:26:08.442661] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:19.470 13:26:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:19.470 13:26:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:19.470 13:26:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:19.470 13:26:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:20.409 13:26:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:20.409 13:26:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:20.409 13:26:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:20.409 13:26:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:16:20.409 13:26:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:20.409 13:26:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:20.409 13:26:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.409 13:26:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.409 13:26:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.409 13:26:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.409 13:26:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.409 13:26:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:20.409 "name": "raid_bdev1", 00:16:20.409 "uuid": "2718d865-9313-45f6-b296-1f7f1b0cfead", 00:16:20.409 "strip_size_kb": 64, 00:16:20.409 "state": "online", 00:16:20.409 "raid_level": "raid5f", 00:16:20.409 "superblock": false, 00:16:20.409 "num_base_bdevs": 4, 00:16:20.409 "num_base_bdevs_discovered": 4, 00:16:20.409 "num_base_bdevs_operational": 4, 00:16:20.409 "base_bdevs_list": [ 00:16:20.409 { 00:16:20.409 "name": "spare", 00:16:20.409 "uuid": "c9767d78-e024-52e6-a9f4-c90d41be3586", 00:16:20.409 "is_configured": true, 00:16:20.409 "data_offset": 0, 00:16:20.409 "data_size": 65536 00:16:20.409 }, 00:16:20.409 { 00:16:20.409 "name": "BaseBdev2", 00:16:20.409 "uuid": "5e8bbed0-bbc2-5319-ab6b-d022dba43e20", 00:16:20.409 "is_configured": true, 00:16:20.409 "data_offset": 0, 00:16:20.409 "data_size": 65536 00:16:20.409 }, 00:16:20.409 { 00:16:20.409 "name": "BaseBdev3", 00:16:20.409 "uuid": "6b09d438-d88f-5de2-b9b0-627bd52980e7", 00:16:20.409 "is_configured": true, 00:16:20.409 "data_offset": 0, 00:16:20.409 "data_size": 65536 00:16:20.409 }, 00:16:20.409 { 00:16:20.409 "name": "BaseBdev4", 00:16:20.409 
"uuid": "73305a37-7c11-5078-af11-3824659315e6", 00:16:20.409 "is_configured": true, 00:16:20.409 "data_offset": 0, 00:16:20.409 "data_size": 65536 00:16:20.409 } 00:16:20.409 ] 00:16:20.409 }' 00:16:20.409 13:26:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:20.670 13:26:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:20.670 13:26:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:20.670 13:26:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:20.670 13:26:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:16:20.670 13:26:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:20.670 13:26:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:20.670 13:26:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:20.670 13:26:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:20.670 13:26:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:20.670 13:26:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.670 13:26:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.670 13:26:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.670 13:26:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.670 13:26:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.670 13:26:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:20.670 "name": "raid_bdev1", 00:16:20.670 "uuid": 
"2718d865-9313-45f6-b296-1f7f1b0cfead", 00:16:20.670 "strip_size_kb": 64, 00:16:20.670 "state": "online", 00:16:20.670 "raid_level": "raid5f", 00:16:20.670 "superblock": false, 00:16:20.670 "num_base_bdevs": 4, 00:16:20.670 "num_base_bdevs_discovered": 4, 00:16:20.670 "num_base_bdevs_operational": 4, 00:16:20.670 "base_bdevs_list": [ 00:16:20.670 { 00:16:20.670 "name": "spare", 00:16:20.670 "uuid": "c9767d78-e024-52e6-a9f4-c90d41be3586", 00:16:20.670 "is_configured": true, 00:16:20.670 "data_offset": 0, 00:16:20.670 "data_size": 65536 00:16:20.670 }, 00:16:20.670 { 00:16:20.670 "name": "BaseBdev2", 00:16:20.670 "uuid": "5e8bbed0-bbc2-5319-ab6b-d022dba43e20", 00:16:20.670 "is_configured": true, 00:16:20.670 "data_offset": 0, 00:16:20.670 "data_size": 65536 00:16:20.670 }, 00:16:20.670 { 00:16:20.670 "name": "BaseBdev3", 00:16:20.670 "uuid": "6b09d438-d88f-5de2-b9b0-627bd52980e7", 00:16:20.670 "is_configured": true, 00:16:20.670 "data_offset": 0, 00:16:20.670 "data_size": 65536 00:16:20.670 }, 00:16:20.670 { 00:16:20.670 "name": "BaseBdev4", 00:16:20.670 "uuid": "73305a37-7c11-5078-af11-3824659315e6", 00:16:20.670 "is_configured": true, 00:16:20.670 "data_offset": 0, 00:16:20.670 "data_size": 65536 00:16:20.670 } 00:16:20.670 ] 00:16:20.670 }' 00:16:20.670 13:26:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:20.670 13:26:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:20.670 13:26:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:20.670 13:26:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:20.670 13:26:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:20.670 13:26:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:20.670 13:26:09 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:20.670 13:26:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:20.670 13:26:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:20.670 13:26:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:20.670 13:26:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.670 13:26:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.670 13:26:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.670 13:26:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.670 13:26:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.670 13:26:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.670 13:26:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.670 13:26:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.670 13:26:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.670 13:26:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.670 "name": "raid_bdev1", 00:16:20.670 "uuid": "2718d865-9313-45f6-b296-1f7f1b0cfead", 00:16:20.670 "strip_size_kb": 64, 00:16:20.670 "state": "online", 00:16:20.670 "raid_level": "raid5f", 00:16:20.670 "superblock": false, 00:16:20.670 "num_base_bdevs": 4, 00:16:20.670 "num_base_bdevs_discovered": 4, 00:16:20.670 "num_base_bdevs_operational": 4, 00:16:20.670 "base_bdevs_list": [ 00:16:20.670 { 00:16:20.670 "name": "spare", 00:16:20.670 "uuid": "c9767d78-e024-52e6-a9f4-c90d41be3586", 00:16:20.670 "is_configured": 
true, 00:16:20.670 "data_offset": 0, 00:16:20.670 "data_size": 65536 00:16:20.670 }, 00:16:20.670 { 00:16:20.670 "name": "BaseBdev2", 00:16:20.670 "uuid": "5e8bbed0-bbc2-5319-ab6b-d022dba43e20", 00:16:20.670 "is_configured": true, 00:16:20.670 "data_offset": 0, 00:16:20.670 "data_size": 65536 00:16:20.670 }, 00:16:20.670 { 00:16:20.670 "name": "BaseBdev3", 00:16:20.670 "uuid": "6b09d438-d88f-5de2-b9b0-627bd52980e7", 00:16:20.670 "is_configured": true, 00:16:20.670 "data_offset": 0, 00:16:20.670 "data_size": 65536 00:16:20.670 }, 00:16:20.670 { 00:16:20.670 "name": "BaseBdev4", 00:16:20.670 "uuid": "73305a37-7c11-5078-af11-3824659315e6", 00:16:20.670 "is_configured": true, 00:16:20.670 "data_offset": 0, 00:16:20.670 "data_size": 65536 00:16:20.670 } 00:16:20.670 ] 00:16:20.670 }' 00:16:20.670 13:26:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.670 13:26:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.239 13:26:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:21.239 13:26:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.239 13:26:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.239 [2024-11-17 13:26:10.221413] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:21.239 [2024-11-17 13:26:10.221505] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:21.239 [2024-11-17 13:26:10.221600] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:21.239 [2024-11-17 13:26:10.221700] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:21.239 [2024-11-17 13:26:10.221711] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:21.239 13:26:10 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.239 13:26:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.239 13:26:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.239 13:26:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:16:21.239 13:26:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.239 13:26:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.239 13:26:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:21.239 13:26:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:21.239 13:26:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:21.239 13:26:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:21.239 13:26:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:21.239 13:26:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:21.239 13:26:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:21.239 13:26:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:21.239 13:26:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:21.239 13:26:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:21.239 13:26:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:21.239 13:26:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:21.239 13:26:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:21.499 /dev/nbd0 00:16:21.499 13:26:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:21.499 13:26:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:21.499 13:26:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:21.499 13:26:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:21.499 13:26:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:21.499 13:26:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:21.499 13:26:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:21.499 13:26:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:21.499 13:26:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:21.499 13:26:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:21.499 13:26:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:21.499 1+0 records in 00:16:21.499 1+0 records out 00:16:21.499 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000320196 s, 12.8 MB/s 00:16:21.499 13:26:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:21.499 13:26:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:21.499 13:26:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:21.499 13:26:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:21.499 13:26:10 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@893 -- # return 0 00:16:21.499 13:26:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:21.499 13:26:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:21.499 13:26:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:21.762 /dev/nbd1 00:16:21.762 13:26:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:21.762 13:26:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:21.762 13:26:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:21.762 13:26:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:21.762 13:26:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:21.762 13:26:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:21.762 13:26:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:21.762 13:26:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:21.762 13:26:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:21.762 13:26:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:21.762 13:26:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:21.762 1+0 records in 00:16:21.762 1+0 records out 00:16:21.762 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00021802 s, 18.8 MB/s 00:16:21.762 13:26:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:21.762 13:26:10 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@890 -- # size=4096 00:16:21.762 13:26:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:21.762 13:26:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:21.762 13:26:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:21.762 13:26:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:21.762 13:26:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:21.762 13:26:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:21.762 13:26:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:21.762 13:26:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:21.762 13:26:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:21.762 13:26:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:21.762 13:26:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:21.762 13:26:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:21.762 13:26:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:22.022 13:26:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:22.022 13:26:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:22.022 13:26:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:22.022 13:26:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:22.022 13:26:11 bdev_raid.raid5f_rebuild_test -- 
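The `cmp -i 0 /dev/nbd0 /dev/nbd1` step above is the actual data check of this test: once the surviving base bdev and the rebuilt spare are both exported over NBD, they must be byte-identical from offset 0. A minimal sketch of that comparison, using two ordinary temp files in place of the NBD devices (the paths and the 64 KiB size here are illustrative assumptions, not taken from the test):

```shell
# Sketch of the bdev_raid.sh@738 verification step: byte-compare two
# block images from offset 0. Temp files stand in for /dev/nbd0 and
# /dev/nbd1; the random 64 KiB payload is an arbitrary stand-in.
a=$(mktemp) && b=$(mktemp)
dd if=/dev/urandom of="$a" bs=4096 count=16 status=none
cp "$a" "$b"            # a successful rebuild yields identical content

cmp -s -i 0 "$a" "$b"   # -i 0: skip zero initial bytes, i.e. compare from the start
cmp_ok=$?
echo "cmp exit status: $cmp_ok"   # 0 means the images match

rm -f "$a" "$b"
```

If the rebuild had corrupted or skipped any stripe, `cmp` would exit non-zero and the test would fail here rather than in the earlier JSON-based progress checks.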
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:22.022 13:26:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:22.022 13:26:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:22.022 13:26:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:22.022 13:26:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:22.022 13:26:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:22.282 13:26:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:22.282 13:26:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:22.282 13:26:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:22.282 13:26:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:22.282 13:26:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:22.282 13:26:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:22.282 13:26:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:22.282 13:26:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:22.282 13:26:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:22.282 13:26:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 84454 00:16:22.282 13:26:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 84454 ']' 00:16:22.282 13:26:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 84454 00:16:22.282 13:26:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:16:22.282 13:26:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 
-- # '[' Linux = Linux ']' 00:16:22.282 13:26:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84454 00:16:22.282 killing process with pid 84454 00:16:22.282 Received shutdown signal, test time was about 60.000000 seconds 00:16:22.282 00:16:22.282 Latency(us) 00:16:22.282 [2024-11-17T13:26:11.506Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:22.282 [2024-11-17T13:26:11.506Z] =================================================================================================================== 00:16:22.282 [2024-11-17T13:26:11.506Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:22.282 13:26:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:22.282 13:26:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:22.282 13:26:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84454' 00:16:22.282 13:26:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 84454 00:16:22.282 [2024-11-17 13:26:11.406956] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:22.282 13:26:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 84454 00:16:22.853 [2024-11-17 13:26:11.866862] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:23.793 ************************************ 00:16:23.793 13:26:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:16:23.793 00:16:23.793 real 0m19.854s 00:16:23.793 user 0m23.632s 00:16:23.793 sys 0m2.250s 00:16:23.793 13:26:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:23.793 13:26:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.793 END TEST raid5f_rebuild_test 00:16:23.793 ************************************ 00:16:23.793 13:26:12 
bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:16:23.793 13:26:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:23.793 13:26:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:23.793 13:26:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:23.793 ************************************ 00:16:23.793 START TEST raid5f_rebuild_test_sb 00:16:23.793 ************************************ 00:16:23.793 13:26:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:16:23.793 13:26:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:23.793 13:26:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:23.793 13:26:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:23.793 13:26:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:23.793 13:26:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:23.793 13:26:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:23.793 13:26:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:23.793 13:26:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:23.793 13:26:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:23.793 13:26:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:23.793 13:26:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:23.793 13:26:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:23.793 13:26:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:16:23.793 13:26:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:23.793 13:26:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:23.794 13:26:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:23.794 13:26:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:23.794 13:26:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:23.794 13:26:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:23.794 13:26:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:23.794 13:26:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:23.794 13:26:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:23.794 13:26:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:23.794 13:26:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:23.794 13:26:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:23.794 13:26:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:23.794 13:26:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:23.794 13:26:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:23.794 13:26:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:23.794 13:26:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:23.794 13:26:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:23.794 13:26:12 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:23.794 13:26:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=84980 00:16:23.794 13:26:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:23.794 13:26:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 84980 00:16:23.794 13:26:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 84980 ']' 00:16:23.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:23.794 13:26:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:23.794 13:26:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:23.794 13:26:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:23.794 13:26:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:23.794 13:26:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.054 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:24.054 Zero copy mechanism will not be used. 00:16:24.054 [2024-11-17 13:26:13.090808] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:16:24.054 [2024-11-17 13:26:13.090932] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84980 ] 00:16:24.054 [2024-11-17 13:26:13.248565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.314 [2024-11-17 13:26:13.355836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.573 [2024-11-17 13:26:13.546127] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:24.573 [2024-11-17 13:26:13.546220] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:24.832 13:26:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:24.832 13:26:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:24.832 13:26:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:24.832 13:26:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:24.832 13:26:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.832 13:26:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.832 BaseBdev1_malloc 00:16:24.832 13:26:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.832 13:26:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:24.832 13:26:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.832 13:26:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.832 [2024-11-17 13:26:13.947625] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:24.832 [2024-11-17 13:26:13.947700] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:24.832 [2024-11-17 13:26:13.947723] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:24.832 [2024-11-17 13:26:13.947734] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:24.832 [2024-11-17 13:26:13.949772] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:24.832 [2024-11-17 13:26:13.949813] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:24.832 BaseBdev1 00:16:24.832 13:26:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.832 13:26:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:24.832 13:26:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:24.832 13:26:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.832 13:26:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.832 BaseBdev2_malloc 00:16:24.832 13:26:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.832 13:26:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:24.832 13:26:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.832 13:26:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.832 [2024-11-17 13:26:14.001886] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:24.832 [2024-11-17 13:26:14.002032] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:16:24.832 [2024-11-17 13:26:14.002054] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:24.832 [2024-11-17 13:26:14.002068] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:24.832 [2024-11-17 13:26:14.004208] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:24.832 [2024-11-17 13:26:14.004256] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:24.832 BaseBdev2 00:16:24.832 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.832 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:24.832 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:24.832 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.832 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.092 BaseBdev3_malloc 00:16:25.092 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.092 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:25.092 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.092 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.092 [2024-11-17 13:26:14.069508] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:25.092 [2024-11-17 13:26:14.069564] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:25.092 [2024-11-17 13:26:14.069585] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:25.092 [2024-11-17 
13:26:14.069595] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:25.092 [2024-11-17 13:26:14.071593] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:25.092 [2024-11-17 13:26:14.071634] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:25.092 BaseBdev3 00:16:25.092 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.092 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:25.092 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:25.092 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.092 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.092 BaseBdev4_malloc 00:16:25.092 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.092 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:25.092 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.092 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.092 [2024-11-17 13:26:14.124715] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:25.092 [2024-11-17 13:26:14.124769] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:25.092 [2024-11-17 13:26:14.124787] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:25.092 [2024-11-17 13:26:14.124797] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:25.092 [2024-11-17 13:26:14.126943] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:16:25.092 [2024-11-17 13:26:14.127017] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:25.092 BaseBdev4 00:16:25.092 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.092 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:25.092 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.092 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.092 spare_malloc 00:16:25.092 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.092 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:25.092 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.092 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.092 spare_delay 00:16:25.092 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.092 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:25.092 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.092 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.093 [2024-11-17 13:26:14.191967] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:25.093 [2024-11-17 13:26:14.192086] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:25.093 [2024-11-17 13:26:14.192108] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 
00:16:25.093 [2024-11-17 13:26:14.192119] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:25.093 [2024-11-17 13:26:14.194104] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:25.093 [2024-11-17 13:26:14.194144] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:25.093 spare 00:16:25.093 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.093 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:25.093 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.093 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.093 [2024-11-17 13:26:14.204001] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:25.093 [2024-11-17 13:26:14.205732] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:25.093 [2024-11-17 13:26:14.205794] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:25.093 [2024-11-17 13:26:14.205842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:25.093 [2024-11-17 13:26:14.206016] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:25.093 [2024-11-17 13:26:14.206033] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:25.093 [2024-11-17 13:26:14.206281] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:25.093 [2024-11-17 13:26:14.213391] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:25.093 [2024-11-17 13:26:14.213449] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000007780 00:16:25.093 [2024-11-17 13:26:14.213636] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:25.093 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.093 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:25.093 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:25.093 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:25.093 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:25.093 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:25.093 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:25.093 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.093 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.093 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.093 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.093 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.093 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.093 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.093 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.093 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.093 13:26:14 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.093 "name": "raid_bdev1", 00:16:25.093 "uuid": "64b49ab3-fd0a-4b57-ad35-22e5318313bd", 00:16:25.093 "strip_size_kb": 64, 00:16:25.093 "state": "online", 00:16:25.093 "raid_level": "raid5f", 00:16:25.093 "superblock": true, 00:16:25.093 "num_base_bdevs": 4, 00:16:25.093 "num_base_bdevs_discovered": 4, 00:16:25.093 "num_base_bdevs_operational": 4, 00:16:25.093 "base_bdevs_list": [ 00:16:25.093 { 00:16:25.093 "name": "BaseBdev1", 00:16:25.093 "uuid": "17dbf4af-56ad-5104-a10b-edec45927aa8", 00:16:25.093 "is_configured": true, 00:16:25.093 "data_offset": 2048, 00:16:25.093 "data_size": 63488 00:16:25.093 }, 00:16:25.093 { 00:16:25.093 "name": "BaseBdev2", 00:16:25.093 "uuid": "812a063c-b5de-5864-a788-dead020d1c80", 00:16:25.093 "is_configured": true, 00:16:25.093 "data_offset": 2048, 00:16:25.093 "data_size": 63488 00:16:25.093 }, 00:16:25.093 { 00:16:25.093 "name": "BaseBdev3", 00:16:25.093 "uuid": "177d1e51-58a4-5d97-aeda-0d8261c848e1", 00:16:25.093 "is_configured": true, 00:16:25.093 "data_offset": 2048, 00:16:25.093 "data_size": 63488 00:16:25.093 }, 00:16:25.093 { 00:16:25.093 "name": "BaseBdev4", 00:16:25.093 "uuid": "f646152f-b104-5525-90e5-c474d6a5eafb", 00:16:25.093 "is_configured": true, 00:16:25.093 "data_offset": 2048, 00:16:25.093 "data_size": 63488 00:16:25.093 } 00:16:25.093 ] 00:16:25.093 }' 00:16:25.093 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.093 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.663 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:25.663 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:25.663 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.663 13:26:14 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.663 [2024-11-17 13:26:14.689224] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:25.663 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.663 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:16:25.663 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.663 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.663 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.663 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:25.663 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.663 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:25.663 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:25.663 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:25.663 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:25.663 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:25.663 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:25.663 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:25.663 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:25.663 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:25.663 13:26:14 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:25.663 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:25.663 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:25.663 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:25.663 13:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:25.923 [2024-11-17 13:26:14.972619] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:25.924 /dev/nbd0 00:16:25.924 13:26:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:25.924 13:26:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:25.924 13:26:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:25.924 13:26:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:25.924 13:26:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:25.924 13:26:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:25.924 13:26:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:25.924 13:26:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:25.924 13:26:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:25.924 13:26:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:25.924 13:26:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:25.924 1+0 records in 00:16:25.924 
1+0 records out 00:16:25.924 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000347108 s, 11.8 MB/s 00:16:25.924 13:26:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:25.924 13:26:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:25.924 13:26:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:25.924 13:26:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:25.924 13:26:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:25.924 13:26:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:25.924 13:26:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:25.924 13:26:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:25.924 13:26:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:16:25.924 13:26:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:16:25.924 13:26:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:16:26.528 496+0 records in 00:16:26.528 496+0 records out 00:16:26.528 97517568 bytes (98 MB, 93 MiB) copied, 0.462543 s, 211 MB/s 00:16:26.528 13:26:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:26.528 13:26:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:26.528 13:26:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:26.528 13:26:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:26.528 13:26:15 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:26.528 13:26:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:26.528 13:26:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:26.528 [2024-11-17 13:26:15.719346] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:26.528 13:26:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:26.528 13:26:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:26.528 13:26:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:26.528 13:26:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:26.528 13:26:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:26.528 13:26:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:26.528 13:26:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:26.528 13:26:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:26.528 13:26:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:26.528 13:26:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.528 13:26:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.788 [2024-11-17 13:26:15.753418] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:26.788 13:26:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.788 13:26:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:26.788 13:26:15 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:26.788 13:26:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:26.788 13:26:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:26.788 13:26:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:26.788 13:26:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:26.788 13:26:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.788 13:26:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.788 13:26:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.788 13:26:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.788 13:26:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.788 13:26:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.788 13:26:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.788 13:26:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.788 13:26:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.788 13:26:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.788 "name": "raid_bdev1", 00:16:26.788 "uuid": "64b49ab3-fd0a-4b57-ad35-22e5318313bd", 00:16:26.788 "strip_size_kb": 64, 00:16:26.788 "state": "online", 00:16:26.788 "raid_level": "raid5f", 00:16:26.788 "superblock": true, 00:16:26.788 "num_base_bdevs": 4, 00:16:26.788 "num_base_bdevs_discovered": 3, 00:16:26.788 "num_base_bdevs_operational": 3, 00:16:26.788 
"base_bdevs_list": [ 00:16:26.788 { 00:16:26.788 "name": null, 00:16:26.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.788 "is_configured": false, 00:16:26.788 "data_offset": 0, 00:16:26.788 "data_size": 63488 00:16:26.788 }, 00:16:26.788 { 00:16:26.788 "name": "BaseBdev2", 00:16:26.788 "uuid": "812a063c-b5de-5864-a788-dead020d1c80", 00:16:26.788 "is_configured": true, 00:16:26.788 "data_offset": 2048, 00:16:26.788 "data_size": 63488 00:16:26.788 }, 00:16:26.788 { 00:16:26.788 "name": "BaseBdev3", 00:16:26.788 "uuid": "177d1e51-58a4-5d97-aeda-0d8261c848e1", 00:16:26.788 "is_configured": true, 00:16:26.788 "data_offset": 2048, 00:16:26.788 "data_size": 63488 00:16:26.788 }, 00:16:26.788 { 00:16:26.788 "name": "BaseBdev4", 00:16:26.788 "uuid": "f646152f-b104-5525-90e5-c474d6a5eafb", 00:16:26.788 "is_configured": true, 00:16:26.788 "data_offset": 2048, 00:16:26.788 "data_size": 63488 00:16:26.788 } 00:16:26.788 ] 00:16:26.788 }' 00:16:26.788 13:26:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.788 13:26:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.048 13:26:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:27.048 13:26:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.048 13:26:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.048 [2024-11-17 13:26:16.240604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:27.048 [2024-11-17 13:26:16.256306] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:16:27.048 13:26:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.048 13:26:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:27.048 [2024-11-17 13:26:16.265506] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:28.429 13:26:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:28.429 13:26:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:28.429 13:26:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:28.429 13:26:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:28.429 13:26:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:28.429 13:26:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.429 13:26:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.429 13:26:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.429 13:26:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.429 13:26:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.429 13:26:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:28.429 "name": "raid_bdev1", 00:16:28.429 "uuid": "64b49ab3-fd0a-4b57-ad35-22e5318313bd", 00:16:28.429 "strip_size_kb": 64, 00:16:28.429 "state": "online", 00:16:28.429 "raid_level": "raid5f", 00:16:28.429 "superblock": true, 00:16:28.429 "num_base_bdevs": 4, 00:16:28.429 "num_base_bdevs_discovered": 4, 00:16:28.429 "num_base_bdevs_operational": 4, 00:16:28.429 "process": { 00:16:28.429 "type": "rebuild", 00:16:28.429 "target": "spare", 00:16:28.429 "progress": { 00:16:28.429 "blocks": 19200, 00:16:28.429 "percent": 10 00:16:28.429 } 00:16:28.429 }, 00:16:28.429 "base_bdevs_list": [ 00:16:28.429 { 00:16:28.429 "name": "spare", 00:16:28.429 "uuid": 
"0d1bb1ac-38a3-59e9-bc4e-be3f9432bc6a", 00:16:28.429 "is_configured": true, 00:16:28.429 "data_offset": 2048, 00:16:28.429 "data_size": 63488 00:16:28.429 }, 00:16:28.429 { 00:16:28.429 "name": "BaseBdev2", 00:16:28.429 "uuid": "812a063c-b5de-5864-a788-dead020d1c80", 00:16:28.429 "is_configured": true, 00:16:28.429 "data_offset": 2048, 00:16:28.429 "data_size": 63488 00:16:28.429 }, 00:16:28.429 { 00:16:28.429 "name": "BaseBdev3", 00:16:28.429 "uuid": "177d1e51-58a4-5d97-aeda-0d8261c848e1", 00:16:28.429 "is_configured": true, 00:16:28.429 "data_offset": 2048, 00:16:28.429 "data_size": 63488 00:16:28.429 }, 00:16:28.429 { 00:16:28.429 "name": "BaseBdev4", 00:16:28.429 "uuid": "f646152f-b104-5525-90e5-c474d6a5eafb", 00:16:28.429 "is_configured": true, 00:16:28.429 "data_offset": 2048, 00:16:28.429 "data_size": 63488 00:16:28.429 } 00:16:28.429 ] 00:16:28.429 }' 00:16:28.429 13:26:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:28.429 13:26:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:28.429 13:26:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:28.429 13:26:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:28.429 13:26:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:28.429 13:26:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.429 13:26:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.429 [2024-11-17 13:26:17.400333] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:28.429 [2024-11-17 13:26:17.472328] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:28.429 [2024-11-17 13:26:17.472394] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:28.429 [2024-11-17 13:26:17.472411] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:28.429 [2024-11-17 13:26:17.472421] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:28.429 13:26:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.429 13:26:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:28.429 13:26:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:28.429 13:26:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:28.429 13:26:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:28.429 13:26:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:28.429 13:26:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:28.429 13:26:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:28.429 13:26:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:28.429 13:26:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:28.429 13:26:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:28.429 13:26:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.429 13:26:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.429 13:26:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.430 13:26:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:28.430 13:26:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.430 13:26:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:28.430 "name": "raid_bdev1", 00:16:28.430 "uuid": "64b49ab3-fd0a-4b57-ad35-22e5318313bd", 00:16:28.430 "strip_size_kb": 64, 00:16:28.430 "state": "online", 00:16:28.430 "raid_level": "raid5f", 00:16:28.430 "superblock": true, 00:16:28.430 "num_base_bdevs": 4, 00:16:28.430 "num_base_bdevs_discovered": 3, 00:16:28.430 "num_base_bdevs_operational": 3, 00:16:28.430 "base_bdevs_list": [ 00:16:28.430 { 00:16:28.430 "name": null, 00:16:28.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.430 "is_configured": false, 00:16:28.430 "data_offset": 0, 00:16:28.430 "data_size": 63488 00:16:28.430 }, 00:16:28.430 { 00:16:28.430 "name": "BaseBdev2", 00:16:28.430 "uuid": "812a063c-b5de-5864-a788-dead020d1c80", 00:16:28.430 "is_configured": true, 00:16:28.430 "data_offset": 2048, 00:16:28.430 "data_size": 63488 00:16:28.430 }, 00:16:28.430 { 00:16:28.430 "name": "BaseBdev3", 00:16:28.430 "uuid": "177d1e51-58a4-5d97-aeda-0d8261c848e1", 00:16:28.430 "is_configured": true, 00:16:28.430 "data_offset": 2048, 00:16:28.430 "data_size": 63488 00:16:28.430 }, 00:16:28.430 { 00:16:28.430 "name": "BaseBdev4", 00:16:28.430 "uuid": "f646152f-b104-5525-90e5-c474d6a5eafb", 00:16:28.430 "is_configured": true, 00:16:28.430 "data_offset": 2048, 00:16:28.430 "data_size": 63488 00:16:28.430 } 00:16:28.430 ] 00:16:28.430 }' 00:16:28.430 13:26:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:28.430 13:26:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.000 13:26:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:29.000 13:26:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:29.000 
13:26:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:29.000 13:26:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:29.000 13:26:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:29.000 13:26:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.000 13:26:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.000 13:26:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.000 13:26:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.000 13:26:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.000 13:26:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:29.000 "name": "raid_bdev1", 00:16:29.000 "uuid": "64b49ab3-fd0a-4b57-ad35-22e5318313bd", 00:16:29.000 "strip_size_kb": 64, 00:16:29.000 "state": "online", 00:16:29.000 "raid_level": "raid5f", 00:16:29.000 "superblock": true, 00:16:29.000 "num_base_bdevs": 4, 00:16:29.000 "num_base_bdevs_discovered": 3, 00:16:29.000 "num_base_bdevs_operational": 3, 00:16:29.000 "base_bdevs_list": [ 00:16:29.000 { 00:16:29.000 "name": null, 00:16:29.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.000 "is_configured": false, 00:16:29.000 "data_offset": 0, 00:16:29.000 "data_size": 63488 00:16:29.000 }, 00:16:29.000 { 00:16:29.000 "name": "BaseBdev2", 00:16:29.000 "uuid": "812a063c-b5de-5864-a788-dead020d1c80", 00:16:29.000 "is_configured": true, 00:16:29.000 "data_offset": 2048, 00:16:29.000 "data_size": 63488 00:16:29.000 }, 00:16:29.000 { 00:16:29.000 "name": "BaseBdev3", 00:16:29.000 "uuid": "177d1e51-58a4-5d97-aeda-0d8261c848e1", 00:16:29.000 "is_configured": true, 00:16:29.000 "data_offset": 2048, 00:16:29.000 
"data_size": 63488 00:16:29.000 }, 00:16:29.000 { 00:16:29.000 "name": "BaseBdev4", 00:16:29.000 "uuid": "f646152f-b104-5525-90e5-c474d6a5eafb", 00:16:29.000 "is_configured": true, 00:16:29.000 "data_offset": 2048, 00:16:29.000 "data_size": 63488 00:16:29.000 } 00:16:29.000 ] 00:16:29.000 }' 00:16:29.000 13:26:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:29.000 13:26:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:29.000 13:26:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:29.000 13:26:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:29.000 13:26:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:29.000 13:26:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.000 13:26:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.000 [2024-11-17 13:26:18.090616] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:29.000 [2024-11-17 13:26:18.105192] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:16:29.000 13:26:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.000 13:26:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:29.000 [2024-11-17 13:26:18.114191] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:29.940 13:26:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:29.940 13:26:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:29.940 13:26:19 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:29.940 13:26:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:29.940 13:26:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:29.940 13:26:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.940 13:26:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.940 13:26:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.940 13:26:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.940 13:26:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.940 13:26:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:29.940 "name": "raid_bdev1", 00:16:29.940 "uuid": "64b49ab3-fd0a-4b57-ad35-22e5318313bd", 00:16:29.940 "strip_size_kb": 64, 00:16:29.940 "state": "online", 00:16:29.940 "raid_level": "raid5f", 00:16:29.940 "superblock": true, 00:16:29.940 "num_base_bdevs": 4, 00:16:29.940 "num_base_bdevs_discovered": 4, 00:16:29.940 "num_base_bdevs_operational": 4, 00:16:29.940 "process": { 00:16:29.940 "type": "rebuild", 00:16:29.940 "target": "spare", 00:16:29.940 "progress": { 00:16:29.940 "blocks": 19200, 00:16:29.940 "percent": 10 00:16:29.940 } 00:16:29.940 }, 00:16:29.940 "base_bdevs_list": [ 00:16:29.940 { 00:16:29.940 "name": "spare", 00:16:29.940 "uuid": "0d1bb1ac-38a3-59e9-bc4e-be3f9432bc6a", 00:16:29.940 "is_configured": true, 00:16:29.940 "data_offset": 2048, 00:16:29.940 "data_size": 63488 00:16:29.940 }, 00:16:29.940 { 00:16:29.940 "name": "BaseBdev2", 00:16:29.940 "uuid": "812a063c-b5de-5864-a788-dead020d1c80", 00:16:29.940 "is_configured": true, 00:16:29.940 "data_offset": 2048, 00:16:29.940 "data_size": 63488 00:16:29.940 }, 00:16:29.940 { 
00:16:29.940 "name": "BaseBdev3", 00:16:29.940 "uuid": "177d1e51-58a4-5d97-aeda-0d8261c848e1", 00:16:29.940 "is_configured": true, 00:16:29.940 "data_offset": 2048, 00:16:29.940 "data_size": 63488 00:16:29.940 }, 00:16:29.940 { 00:16:29.940 "name": "BaseBdev4", 00:16:29.940 "uuid": "f646152f-b104-5525-90e5-c474d6a5eafb", 00:16:29.940 "is_configured": true, 00:16:29.940 "data_offset": 2048, 00:16:29.940 "data_size": 63488 00:16:29.940 } 00:16:29.940 ] 00:16:29.940 }' 00:16:30.200 13:26:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:30.200 13:26:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:30.200 13:26:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:30.200 13:26:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:30.200 13:26:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:30.200 13:26:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:30.200 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:30.200 13:26:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:30.200 13:26:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:30.200 13:26:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=629 00:16:30.200 13:26:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:30.200 13:26:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:30.200 13:26:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:30.200 13:26:19 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:30.200 13:26:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:30.200 13:26:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:30.200 13:26:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.200 13:26:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.200 13:26:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.200 13:26:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.200 13:26:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.200 13:26:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:30.200 "name": "raid_bdev1", 00:16:30.200 "uuid": "64b49ab3-fd0a-4b57-ad35-22e5318313bd", 00:16:30.200 "strip_size_kb": 64, 00:16:30.200 "state": "online", 00:16:30.200 "raid_level": "raid5f", 00:16:30.200 "superblock": true, 00:16:30.200 "num_base_bdevs": 4, 00:16:30.200 "num_base_bdevs_discovered": 4, 00:16:30.200 "num_base_bdevs_operational": 4, 00:16:30.200 "process": { 00:16:30.200 "type": "rebuild", 00:16:30.200 "target": "spare", 00:16:30.200 "progress": { 00:16:30.200 "blocks": 21120, 00:16:30.200 "percent": 11 00:16:30.200 } 00:16:30.200 }, 00:16:30.200 "base_bdevs_list": [ 00:16:30.200 { 00:16:30.200 "name": "spare", 00:16:30.200 "uuid": "0d1bb1ac-38a3-59e9-bc4e-be3f9432bc6a", 00:16:30.200 "is_configured": true, 00:16:30.200 "data_offset": 2048, 00:16:30.200 "data_size": 63488 00:16:30.200 }, 00:16:30.200 { 00:16:30.200 "name": "BaseBdev2", 00:16:30.200 "uuid": "812a063c-b5de-5864-a788-dead020d1c80", 00:16:30.200 "is_configured": true, 00:16:30.200 "data_offset": 2048, 00:16:30.200 "data_size": 63488 00:16:30.200 }, 00:16:30.200 { 
00:16:30.200 "name": "BaseBdev3", 00:16:30.200 "uuid": "177d1e51-58a4-5d97-aeda-0d8261c848e1", 00:16:30.200 "is_configured": true, 00:16:30.200 "data_offset": 2048, 00:16:30.200 "data_size": 63488 00:16:30.200 }, 00:16:30.200 { 00:16:30.200 "name": "BaseBdev4", 00:16:30.200 "uuid": "f646152f-b104-5525-90e5-c474d6a5eafb", 00:16:30.200 "is_configured": true, 00:16:30.200 "data_offset": 2048, 00:16:30.200 "data_size": 63488 00:16:30.200 } 00:16:30.200 ] 00:16:30.200 }' 00:16:30.200 13:26:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:30.200 13:26:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:30.200 13:26:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:30.200 13:26:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:30.200 13:26:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:31.581 13:26:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:31.581 13:26:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:31.581 13:26:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:31.581 13:26:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:31.581 13:26:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:31.581 13:26:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:31.581 13:26:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.581 13:26:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.581 13:26:20 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.581 13:26:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.581 13:26:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.581 13:26:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:31.581 "name": "raid_bdev1", 00:16:31.581 "uuid": "64b49ab3-fd0a-4b57-ad35-22e5318313bd", 00:16:31.581 "strip_size_kb": 64, 00:16:31.581 "state": "online", 00:16:31.581 "raid_level": "raid5f", 00:16:31.581 "superblock": true, 00:16:31.581 "num_base_bdevs": 4, 00:16:31.581 "num_base_bdevs_discovered": 4, 00:16:31.581 "num_base_bdevs_operational": 4, 00:16:31.581 "process": { 00:16:31.581 "type": "rebuild", 00:16:31.581 "target": "spare", 00:16:31.581 "progress": { 00:16:31.581 "blocks": 42240, 00:16:31.581 "percent": 22 00:16:31.581 } 00:16:31.581 }, 00:16:31.581 "base_bdevs_list": [ 00:16:31.581 { 00:16:31.581 "name": "spare", 00:16:31.581 "uuid": "0d1bb1ac-38a3-59e9-bc4e-be3f9432bc6a", 00:16:31.581 "is_configured": true, 00:16:31.581 "data_offset": 2048, 00:16:31.581 "data_size": 63488 00:16:31.581 }, 00:16:31.581 { 00:16:31.581 "name": "BaseBdev2", 00:16:31.581 "uuid": "812a063c-b5de-5864-a788-dead020d1c80", 00:16:31.581 "is_configured": true, 00:16:31.581 "data_offset": 2048, 00:16:31.581 "data_size": 63488 00:16:31.581 }, 00:16:31.581 { 00:16:31.581 "name": "BaseBdev3", 00:16:31.581 "uuid": "177d1e51-58a4-5d97-aeda-0d8261c848e1", 00:16:31.581 "is_configured": true, 00:16:31.581 "data_offset": 2048, 00:16:31.581 "data_size": 63488 00:16:31.581 }, 00:16:31.581 { 00:16:31.581 "name": "BaseBdev4", 00:16:31.581 "uuid": "f646152f-b104-5525-90e5-c474d6a5eafb", 00:16:31.582 "is_configured": true, 00:16:31.582 "data_offset": 2048, 00:16:31.582 "data_size": 63488 00:16:31.582 } 00:16:31.582 ] 00:16:31.582 }' 00:16:31.582 13:26:20 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:31.582 13:26:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:31.582 13:26:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:31.582 13:26:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:31.582 13:26:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:32.524 13:26:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:32.524 13:26:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:32.524 13:26:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:32.524 13:26:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:32.525 13:26:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:32.525 13:26:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:32.525 13:26:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.525 13:26:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.525 13:26:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.525 13:26:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.525 13:26:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.525 13:26:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:32.525 "name": "raid_bdev1", 00:16:32.525 "uuid": "64b49ab3-fd0a-4b57-ad35-22e5318313bd", 00:16:32.525 "strip_size_kb": 64, 00:16:32.525 "state": 
"online", 00:16:32.525 "raid_level": "raid5f", 00:16:32.525 "superblock": true, 00:16:32.525 "num_base_bdevs": 4, 00:16:32.525 "num_base_bdevs_discovered": 4, 00:16:32.525 "num_base_bdevs_operational": 4, 00:16:32.525 "process": { 00:16:32.525 "type": "rebuild", 00:16:32.525 "target": "spare", 00:16:32.525 "progress": { 00:16:32.525 "blocks": 65280, 00:16:32.525 "percent": 34 00:16:32.525 } 00:16:32.525 }, 00:16:32.525 "base_bdevs_list": [ 00:16:32.525 { 00:16:32.525 "name": "spare", 00:16:32.525 "uuid": "0d1bb1ac-38a3-59e9-bc4e-be3f9432bc6a", 00:16:32.525 "is_configured": true, 00:16:32.525 "data_offset": 2048, 00:16:32.525 "data_size": 63488 00:16:32.525 }, 00:16:32.525 { 00:16:32.525 "name": "BaseBdev2", 00:16:32.525 "uuid": "812a063c-b5de-5864-a788-dead020d1c80", 00:16:32.525 "is_configured": true, 00:16:32.525 "data_offset": 2048, 00:16:32.525 "data_size": 63488 00:16:32.525 }, 00:16:32.525 { 00:16:32.525 "name": "BaseBdev3", 00:16:32.525 "uuid": "177d1e51-58a4-5d97-aeda-0d8261c848e1", 00:16:32.525 "is_configured": true, 00:16:32.525 "data_offset": 2048, 00:16:32.525 "data_size": 63488 00:16:32.525 }, 00:16:32.525 { 00:16:32.525 "name": "BaseBdev4", 00:16:32.525 "uuid": "f646152f-b104-5525-90e5-c474d6a5eafb", 00:16:32.525 "is_configured": true, 00:16:32.525 "data_offset": 2048, 00:16:32.525 "data_size": 63488 00:16:32.525 } 00:16:32.525 ] 00:16:32.525 }' 00:16:32.525 13:26:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:32.525 13:26:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:32.525 13:26:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:32.525 13:26:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:32.525 13:26:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:33.906 13:26:22 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:33.906 13:26:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:33.906 13:26:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:33.906 13:26:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:33.906 13:26:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:33.906 13:26:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:33.906 13:26:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.906 13:26:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.906 13:26:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.906 13:26:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.906 13:26:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.906 13:26:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:33.906 "name": "raid_bdev1", 00:16:33.906 "uuid": "64b49ab3-fd0a-4b57-ad35-22e5318313bd", 00:16:33.906 "strip_size_kb": 64, 00:16:33.906 "state": "online", 00:16:33.906 "raid_level": "raid5f", 00:16:33.906 "superblock": true, 00:16:33.906 "num_base_bdevs": 4, 00:16:33.906 "num_base_bdevs_discovered": 4, 00:16:33.906 "num_base_bdevs_operational": 4, 00:16:33.906 "process": { 00:16:33.906 "type": "rebuild", 00:16:33.906 "target": "spare", 00:16:33.906 "progress": { 00:16:33.906 "blocks": 86400, 00:16:33.906 "percent": 45 00:16:33.906 } 00:16:33.906 }, 00:16:33.906 "base_bdevs_list": [ 00:16:33.906 { 00:16:33.906 "name": "spare", 00:16:33.906 "uuid": "0d1bb1ac-38a3-59e9-bc4e-be3f9432bc6a", 
00:16:33.906 "is_configured": true, 00:16:33.906 "data_offset": 2048, 00:16:33.906 "data_size": 63488 00:16:33.906 }, 00:16:33.906 { 00:16:33.906 "name": "BaseBdev2", 00:16:33.906 "uuid": "812a063c-b5de-5864-a788-dead020d1c80", 00:16:33.906 "is_configured": true, 00:16:33.906 "data_offset": 2048, 00:16:33.906 "data_size": 63488 00:16:33.906 }, 00:16:33.906 { 00:16:33.906 "name": "BaseBdev3", 00:16:33.906 "uuid": "177d1e51-58a4-5d97-aeda-0d8261c848e1", 00:16:33.906 "is_configured": true, 00:16:33.906 "data_offset": 2048, 00:16:33.906 "data_size": 63488 00:16:33.906 }, 00:16:33.906 { 00:16:33.906 "name": "BaseBdev4", 00:16:33.906 "uuid": "f646152f-b104-5525-90e5-c474d6a5eafb", 00:16:33.906 "is_configured": true, 00:16:33.906 "data_offset": 2048, 00:16:33.906 "data_size": 63488 00:16:33.906 } 00:16:33.906 ] 00:16:33.906 }' 00:16:33.906 13:26:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:33.906 13:26:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:33.906 13:26:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:33.906 13:26:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:33.906 13:26:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:34.846 13:26:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:34.846 13:26:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:34.846 13:26:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:34.846 13:26:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:34.846 13:26:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:34.846 13:26:23 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:34.846 13:26:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.846 13:26:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.846 13:26:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.846 13:26:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.846 13:26:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.846 13:26:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:34.846 "name": "raid_bdev1", 00:16:34.846 "uuid": "64b49ab3-fd0a-4b57-ad35-22e5318313bd", 00:16:34.846 "strip_size_kb": 64, 00:16:34.846 "state": "online", 00:16:34.846 "raid_level": "raid5f", 00:16:34.846 "superblock": true, 00:16:34.846 "num_base_bdevs": 4, 00:16:34.846 "num_base_bdevs_discovered": 4, 00:16:34.846 "num_base_bdevs_operational": 4, 00:16:34.846 "process": { 00:16:34.846 "type": "rebuild", 00:16:34.846 "target": "spare", 00:16:34.846 "progress": { 00:16:34.846 "blocks": 109440, 00:16:34.846 "percent": 57 00:16:34.846 } 00:16:34.846 }, 00:16:34.846 "base_bdevs_list": [ 00:16:34.846 { 00:16:34.846 "name": "spare", 00:16:34.846 "uuid": "0d1bb1ac-38a3-59e9-bc4e-be3f9432bc6a", 00:16:34.846 "is_configured": true, 00:16:34.846 "data_offset": 2048, 00:16:34.846 "data_size": 63488 00:16:34.846 }, 00:16:34.846 { 00:16:34.846 "name": "BaseBdev2", 00:16:34.846 "uuid": "812a063c-b5de-5864-a788-dead020d1c80", 00:16:34.846 "is_configured": true, 00:16:34.846 "data_offset": 2048, 00:16:34.846 "data_size": 63488 00:16:34.846 }, 00:16:34.846 { 00:16:34.846 "name": "BaseBdev3", 00:16:34.846 "uuid": "177d1e51-58a4-5d97-aeda-0d8261c848e1", 00:16:34.846 "is_configured": true, 00:16:34.846 "data_offset": 2048, 00:16:34.846 
"data_size": 63488 00:16:34.846 }, 00:16:34.846 { 00:16:34.846 "name": "BaseBdev4", 00:16:34.846 "uuid": "f646152f-b104-5525-90e5-c474d6a5eafb", 00:16:34.846 "is_configured": true, 00:16:34.846 "data_offset": 2048, 00:16:34.846 "data_size": 63488 00:16:34.846 } 00:16:34.846 ] 00:16:34.846 }' 00:16:34.846 13:26:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:34.846 13:26:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:34.846 13:26:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:34.846 13:26:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:34.846 13:26:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:35.785 13:26:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:35.785 13:26:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:35.785 13:26:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:35.785 13:26:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:35.785 13:26:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:35.785 13:26:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:35.785 13:26:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.785 13:26:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.785 13:26:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.785 13:26:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.785 
13:26:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.046 13:26:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:36.046 "name": "raid_bdev1", 00:16:36.046 "uuid": "64b49ab3-fd0a-4b57-ad35-22e5318313bd", 00:16:36.046 "strip_size_kb": 64, 00:16:36.046 "state": "online", 00:16:36.046 "raid_level": "raid5f", 00:16:36.046 "superblock": true, 00:16:36.046 "num_base_bdevs": 4, 00:16:36.046 "num_base_bdevs_discovered": 4, 00:16:36.046 "num_base_bdevs_operational": 4, 00:16:36.046 "process": { 00:16:36.046 "type": "rebuild", 00:16:36.046 "target": "spare", 00:16:36.046 "progress": { 00:16:36.046 "blocks": 130560, 00:16:36.046 "percent": 68 00:16:36.046 } 00:16:36.046 }, 00:16:36.046 "base_bdevs_list": [ 00:16:36.046 { 00:16:36.046 "name": "spare", 00:16:36.046 "uuid": "0d1bb1ac-38a3-59e9-bc4e-be3f9432bc6a", 00:16:36.046 "is_configured": true, 00:16:36.046 "data_offset": 2048, 00:16:36.046 "data_size": 63488 00:16:36.046 }, 00:16:36.046 { 00:16:36.046 "name": "BaseBdev2", 00:16:36.046 "uuid": "812a063c-b5de-5864-a788-dead020d1c80", 00:16:36.046 "is_configured": true, 00:16:36.046 "data_offset": 2048, 00:16:36.046 "data_size": 63488 00:16:36.046 }, 00:16:36.046 { 00:16:36.046 "name": "BaseBdev3", 00:16:36.046 "uuid": "177d1e51-58a4-5d97-aeda-0d8261c848e1", 00:16:36.046 "is_configured": true, 00:16:36.046 "data_offset": 2048, 00:16:36.046 "data_size": 63488 00:16:36.046 }, 00:16:36.046 { 00:16:36.046 "name": "BaseBdev4", 00:16:36.046 "uuid": "f646152f-b104-5525-90e5-c474d6a5eafb", 00:16:36.046 "is_configured": true, 00:16:36.046 "data_offset": 2048, 00:16:36.046 "data_size": 63488 00:16:36.046 } 00:16:36.046 ] 00:16:36.046 }' 00:16:36.046 13:26:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:36.046 13:26:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:36.046 13:26:25 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:36.046 13:26:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:36.046 13:26:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:36.987 13:26:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:36.987 13:26:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:36.987 13:26:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:36.987 13:26:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:36.987 13:26:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:36.987 13:26:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:36.987 13:26:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.987 13:26:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.987 13:26:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.987 13:26:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.987 13:26:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.987 13:26:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:36.987 "name": "raid_bdev1", 00:16:36.987 "uuid": "64b49ab3-fd0a-4b57-ad35-22e5318313bd", 00:16:36.987 "strip_size_kb": 64, 00:16:36.987 "state": "online", 00:16:36.987 "raid_level": "raid5f", 00:16:36.987 "superblock": true, 00:16:36.987 "num_base_bdevs": 4, 00:16:36.987 "num_base_bdevs_discovered": 4, 00:16:36.987 "num_base_bdevs_operational": 
4, 00:16:36.987 "process": { 00:16:36.987 "type": "rebuild", 00:16:36.987 "target": "spare", 00:16:36.987 "progress": { 00:16:36.987 "blocks": 151680, 00:16:36.987 "percent": 79 00:16:36.987 } 00:16:36.987 }, 00:16:36.987 "base_bdevs_list": [ 00:16:36.987 { 00:16:36.987 "name": "spare", 00:16:36.987 "uuid": "0d1bb1ac-38a3-59e9-bc4e-be3f9432bc6a", 00:16:36.987 "is_configured": true, 00:16:36.987 "data_offset": 2048, 00:16:36.987 "data_size": 63488 00:16:36.987 }, 00:16:36.987 { 00:16:36.987 "name": "BaseBdev2", 00:16:36.987 "uuid": "812a063c-b5de-5864-a788-dead020d1c80", 00:16:36.987 "is_configured": true, 00:16:36.987 "data_offset": 2048, 00:16:36.987 "data_size": 63488 00:16:36.987 }, 00:16:36.987 { 00:16:36.987 "name": "BaseBdev3", 00:16:36.987 "uuid": "177d1e51-58a4-5d97-aeda-0d8261c848e1", 00:16:36.987 "is_configured": true, 00:16:36.987 "data_offset": 2048, 00:16:36.987 "data_size": 63488 00:16:36.987 }, 00:16:36.987 { 00:16:36.987 "name": "BaseBdev4", 00:16:36.987 "uuid": "f646152f-b104-5525-90e5-c474d6a5eafb", 00:16:36.987 "is_configured": true, 00:16:36.987 "data_offset": 2048, 00:16:36.987 "data_size": 63488 00:16:36.987 } 00:16:36.987 ] 00:16:36.987 }' 00:16:36.987 13:26:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:37.247 13:26:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:37.247 13:26:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:37.247 13:26:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:37.247 13:26:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:38.186 13:26:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:38.186 13:26:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:38.186 
13:26:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:38.186 13:26:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:38.186 13:26:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:38.186 13:26:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:38.186 13:26:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.186 13:26:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.186 13:26:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.186 13:26:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.186 13:26:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.186 13:26:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:38.186 "name": "raid_bdev1", 00:16:38.187 "uuid": "64b49ab3-fd0a-4b57-ad35-22e5318313bd", 00:16:38.187 "strip_size_kb": 64, 00:16:38.187 "state": "online", 00:16:38.187 "raid_level": "raid5f", 00:16:38.187 "superblock": true, 00:16:38.187 "num_base_bdevs": 4, 00:16:38.187 "num_base_bdevs_discovered": 4, 00:16:38.187 "num_base_bdevs_operational": 4, 00:16:38.187 "process": { 00:16:38.187 "type": "rebuild", 00:16:38.187 "target": "spare", 00:16:38.187 "progress": { 00:16:38.187 "blocks": 174720, 00:16:38.187 "percent": 91 00:16:38.187 } 00:16:38.187 }, 00:16:38.187 "base_bdevs_list": [ 00:16:38.187 { 00:16:38.187 "name": "spare", 00:16:38.187 "uuid": "0d1bb1ac-38a3-59e9-bc4e-be3f9432bc6a", 00:16:38.187 "is_configured": true, 00:16:38.187 "data_offset": 2048, 00:16:38.187 "data_size": 63488 00:16:38.187 }, 00:16:38.187 { 00:16:38.187 "name": "BaseBdev2", 00:16:38.187 "uuid": 
"812a063c-b5de-5864-a788-dead020d1c80", 00:16:38.187 "is_configured": true, 00:16:38.187 "data_offset": 2048, 00:16:38.187 "data_size": 63488 00:16:38.187 }, 00:16:38.187 { 00:16:38.187 "name": "BaseBdev3", 00:16:38.187 "uuid": "177d1e51-58a4-5d97-aeda-0d8261c848e1", 00:16:38.187 "is_configured": true, 00:16:38.187 "data_offset": 2048, 00:16:38.187 "data_size": 63488 00:16:38.187 }, 00:16:38.187 { 00:16:38.187 "name": "BaseBdev4", 00:16:38.187 "uuid": "f646152f-b104-5525-90e5-c474d6a5eafb", 00:16:38.187 "is_configured": true, 00:16:38.187 "data_offset": 2048, 00:16:38.187 "data_size": 63488 00:16:38.187 } 00:16:38.187 ] 00:16:38.187 }' 00:16:38.187 13:26:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:38.187 13:26:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:38.187 13:26:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:38.446 13:26:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:38.446 13:26:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:39.016 [2024-11-17 13:26:28.165194] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:39.016 [2024-11-17 13:26:28.165357] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:39.016 [2024-11-17 13:26:28.165525] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:39.276 13:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:39.276 13:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:39.276 13:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:39.276 13:26:28 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:39.276 13:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:39.276 13:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:39.276 13:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.276 13:26:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.276 13:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.276 13:26:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.276 13:26:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.276 13:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:39.276 "name": "raid_bdev1", 00:16:39.276 "uuid": "64b49ab3-fd0a-4b57-ad35-22e5318313bd", 00:16:39.276 "strip_size_kb": 64, 00:16:39.276 "state": "online", 00:16:39.276 "raid_level": "raid5f", 00:16:39.276 "superblock": true, 00:16:39.276 "num_base_bdevs": 4, 00:16:39.276 "num_base_bdevs_discovered": 4, 00:16:39.276 "num_base_bdevs_operational": 4, 00:16:39.276 "base_bdevs_list": [ 00:16:39.276 { 00:16:39.276 "name": "spare", 00:16:39.276 "uuid": "0d1bb1ac-38a3-59e9-bc4e-be3f9432bc6a", 00:16:39.276 "is_configured": true, 00:16:39.276 "data_offset": 2048, 00:16:39.276 "data_size": 63488 00:16:39.276 }, 00:16:39.276 { 00:16:39.276 "name": "BaseBdev2", 00:16:39.276 "uuid": "812a063c-b5de-5864-a788-dead020d1c80", 00:16:39.276 "is_configured": true, 00:16:39.276 "data_offset": 2048, 00:16:39.276 "data_size": 63488 00:16:39.276 }, 00:16:39.276 { 00:16:39.276 "name": "BaseBdev3", 00:16:39.276 "uuid": "177d1e51-58a4-5d97-aeda-0d8261c848e1", 00:16:39.276 "is_configured": true, 00:16:39.276 "data_offset": 2048, 00:16:39.276 "data_size": 63488 00:16:39.276 }, 
00:16:39.276 { 00:16:39.276 "name": "BaseBdev4", 00:16:39.276 "uuid": "f646152f-b104-5525-90e5-c474d6a5eafb", 00:16:39.276 "is_configured": true, 00:16:39.276 "data_offset": 2048, 00:16:39.276 "data_size": 63488 00:16:39.276 } 00:16:39.276 ] 00:16:39.276 }' 00:16:39.276 13:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:39.537 13:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:39.537 13:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:39.537 13:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:39.537 13:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:16:39.537 13:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:39.537 13:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:39.537 13:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:39.537 13:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:39.537 13:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:39.537 13:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.537 13:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.537 13:26:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.537 13:26:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.537 13:26:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.537 13:26:28 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:39.537 "name": "raid_bdev1", 00:16:39.537 "uuid": "64b49ab3-fd0a-4b57-ad35-22e5318313bd", 00:16:39.537 "strip_size_kb": 64, 00:16:39.537 "state": "online", 00:16:39.537 "raid_level": "raid5f", 00:16:39.537 "superblock": true, 00:16:39.537 "num_base_bdevs": 4, 00:16:39.537 "num_base_bdevs_discovered": 4, 00:16:39.537 "num_base_bdevs_operational": 4, 00:16:39.537 "base_bdevs_list": [ 00:16:39.537 { 00:16:39.537 "name": "spare", 00:16:39.537 "uuid": "0d1bb1ac-38a3-59e9-bc4e-be3f9432bc6a", 00:16:39.537 "is_configured": true, 00:16:39.537 "data_offset": 2048, 00:16:39.537 "data_size": 63488 00:16:39.537 }, 00:16:39.537 { 00:16:39.537 "name": "BaseBdev2", 00:16:39.537 "uuid": "812a063c-b5de-5864-a788-dead020d1c80", 00:16:39.537 "is_configured": true, 00:16:39.537 "data_offset": 2048, 00:16:39.537 "data_size": 63488 00:16:39.537 }, 00:16:39.537 { 00:16:39.537 "name": "BaseBdev3", 00:16:39.537 "uuid": "177d1e51-58a4-5d97-aeda-0d8261c848e1", 00:16:39.537 "is_configured": true, 00:16:39.537 "data_offset": 2048, 00:16:39.537 "data_size": 63488 00:16:39.537 }, 00:16:39.537 { 00:16:39.537 "name": "BaseBdev4", 00:16:39.537 "uuid": "f646152f-b104-5525-90e5-c474d6a5eafb", 00:16:39.537 "is_configured": true, 00:16:39.537 "data_offset": 2048, 00:16:39.537 "data_size": 63488 00:16:39.537 } 00:16:39.537 ] 00:16:39.537 }' 00:16:39.537 13:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:39.537 13:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:39.537 13:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:39.537 13:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:39.537 13:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:39.537 13:26:28 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:39.537 13:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:39.537 13:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:39.537 13:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:39.537 13:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:39.537 13:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:39.537 13:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:39.537 13:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:39.537 13:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:39.537 13:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.537 13:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.537 13:26:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.537 13:26:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.537 13:26:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.537 13:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:39.537 "name": "raid_bdev1", 00:16:39.537 "uuid": "64b49ab3-fd0a-4b57-ad35-22e5318313bd", 00:16:39.537 "strip_size_kb": 64, 00:16:39.537 "state": "online", 00:16:39.537 "raid_level": "raid5f", 00:16:39.537 "superblock": true, 00:16:39.537 "num_base_bdevs": 4, 00:16:39.537 "num_base_bdevs_discovered": 4, 00:16:39.537 "num_base_bdevs_operational": 4, 00:16:39.537 
"base_bdevs_list": [ 00:16:39.537 { 00:16:39.537 "name": "spare", 00:16:39.537 "uuid": "0d1bb1ac-38a3-59e9-bc4e-be3f9432bc6a", 00:16:39.537 "is_configured": true, 00:16:39.537 "data_offset": 2048, 00:16:39.537 "data_size": 63488 00:16:39.537 }, 00:16:39.537 { 00:16:39.537 "name": "BaseBdev2", 00:16:39.537 "uuid": "812a063c-b5de-5864-a788-dead020d1c80", 00:16:39.537 "is_configured": true, 00:16:39.537 "data_offset": 2048, 00:16:39.537 "data_size": 63488 00:16:39.537 }, 00:16:39.537 { 00:16:39.537 "name": "BaseBdev3", 00:16:39.537 "uuid": "177d1e51-58a4-5d97-aeda-0d8261c848e1", 00:16:39.537 "is_configured": true, 00:16:39.537 "data_offset": 2048, 00:16:39.537 "data_size": 63488 00:16:39.537 }, 00:16:39.537 { 00:16:39.537 "name": "BaseBdev4", 00:16:39.537 "uuid": "f646152f-b104-5525-90e5-c474d6a5eafb", 00:16:39.537 "is_configured": true, 00:16:39.537 "data_offset": 2048, 00:16:39.537 "data_size": 63488 00:16:39.537 } 00:16:39.537 ] 00:16:39.537 }' 00:16:39.537 13:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:39.537 13:26:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.128 13:26:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:40.128 13:26:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.128 13:26:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.128 [2024-11-17 13:26:29.158360] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:40.128 [2024-11-17 13:26:29.158468] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:40.128 [2024-11-17 13:26:29.158566] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:40.128 [2024-11-17 13:26:29.158709] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:16:40.128 [2024-11-17 13:26:29.158773] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:40.128 13:26:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.128 13:26:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.128 13:26:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.128 13:26:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:16:40.128 13:26:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.128 13:26:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.128 13:26:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:40.128 13:26:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:40.128 13:26:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:40.128 13:26:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:40.128 13:26:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:40.128 13:26:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:40.128 13:26:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:40.128 13:26:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:40.128 13:26:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:40.128 13:26:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:40.128 13:26:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( 
i = 0 )) 00:16:40.128 13:26:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:40.128 13:26:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:40.389 /dev/nbd0 00:16:40.389 13:26:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:40.389 13:26:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:40.389 13:26:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:40.389 13:26:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:40.389 13:26:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:40.389 13:26:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:40.389 13:26:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:40.389 13:26:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:40.389 13:26:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:40.389 13:26:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:40.389 13:26:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:40.389 1+0 records in 00:16:40.389 1+0 records out 00:16:40.389 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00054946 s, 7.5 MB/s 00:16:40.389 13:26:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:40.389 13:26:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:40.389 13:26:29 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:40.389 13:26:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:40.389 13:26:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:40.389 13:26:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:40.389 13:26:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:40.389 13:26:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:40.649 /dev/nbd1 00:16:40.649 13:26:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:40.649 13:26:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:40.649 13:26:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:40.649 13:26:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:40.649 13:26:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:40.649 13:26:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:40.649 13:26:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:40.649 13:26:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:40.649 13:26:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:40.649 13:26:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:40.649 13:26:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 
00:16:40.649 1+0 records in 00:16:40.650 1+0 records out 00:16:40.650 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000404765 s, 10.1 MB/s 00:16:40.650 13:26:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:40.650 13:26:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:40.650 13:26:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:40.650 13:26:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:40.650 13:26:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:40.650 13:26:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:40.650 13:26:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:40.650 13:26:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:40.909 13:26:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:40.909 13:26:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:40.909 13:26:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:40.909 13:26:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:40.909 13:26:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:40.909 13:26:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:40.909 13:26:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:40.909 13:26:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 
-- # basename /dev/nbd0 00:16:40.909 13:26:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:40.909 13:26:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:40.909 13:26:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:40.909 13:26:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:40.909 13:26:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:40.909 13:26:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:40.909 13:26:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:40.909 13:26:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:40.909 13:26:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:41.169 13:26:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:41.169 13:26:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:41.169 13:26:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:41.169 13:26:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:41.169 13:26:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:41.169 13:26:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:41.169 13:26:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:41.169 13:26:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:41.169 13:26:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:41.169 13:26:30 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:41.169 13:26:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.169 13:26:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.169 13:26:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.169 13:26:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:41.169 13:26:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.169 13:26:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.169 [2024-11-17 13:26:30.346477] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:41.169 [2024-11-17 13:26:30.346538] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:41.169 [2024-11-17 13:26:30.346562] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:16:41.169 [2024-11-17 13:26:30.346572] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:41.169 [2024-11-17 13:26:30.348717] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:41.169 [2024-11-17 13:26:30.348756] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:41.169 [2024-11-17 13:26:30.348830] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:41.169 [2024-11-17 13:26:30.348876] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:41.169 [2024-11-17 13:26:30.349019] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:41.169 [2024-11-17 13:26:30.349097] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:41.169 [2024-11-17 13:26:30.349163] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:41.169 spare 00:16:41.169 13:26:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.169 13:26:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:41.169 13:26:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.169 13:26:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.430 [2024-11-17 13:26:30.449076] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:41.430 [2024-11-17 13:26:30.449113] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:41.430 [2024-11-17 13:26:30.449395] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:16:41.430 [2024-11-17 13:26:30.456195] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:41.430 [2024-11-17 13:26:30.456217] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:41.430 [2024-11-17 13:26:30.456410] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:41.430 13:26:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.430 13:26:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:41.430 13:26:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:41.430 13:26:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:41.430 13:26:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:41.430 13:26:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:16:41.430 13:26:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:41.430 13:26:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.430 13:26:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.430 13:26:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.430 13:26:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.430 13:26:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.430 13:26:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.430 13:26:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.430 13:26:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.430 13:26:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.430 13:26:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.430 "name": "raid_bdev1", 00:16:41.430 "uuid": "64b49ab3-fd0a-4b57-ad35-22e5318313bd", 00:16:41.430 "strip_size_kb": 64, 00:16:41.430 "state": "online", 00:16:41.430 "raid_level": "raid5f", 00:16:41.430 "superblock": true, 00:16:41.430 "num_base_bdevs": 4, 00:16:41.430 "num_base_bdevs_discovered": 4, 00:16:41.430 "num_base_bdevs_operational": 4, 00:16:41.430 "base_bdevs_list": [ 00:16:41.430 { 00:16:41.430 "name": "spare", 00:16:41.430 "uuid": "0d1bb1ac-38a3-59e9-bc4e-be3f9432bc6a", 00:16:41.430 "is_configured": true, 00:16:41.430 "data_offset": 2048, 00:16:41.430 "data_size": 63488 00:16:41.430 }, 00:16:41.430 { 00:16:41.430 "name": "BaseBdev2", 00:16:41.430 "uuid": "812a063c-b5de-5864-a788-dead020d1c80", 00:16:41.430 "is_configured": true, 00:16:41.430 "data_offset": 
2048, 00:16:41.430 "data_size": 63488 00:16:41.430 }, 00:16:41.430 { 00:16:41.430 "name": "BaseBdev3", 00:16:41.430 "uuid": "177d1e51-58a4-5d97-aeda-0d8261c848e1", 00:16:41.430 "is_configured": true, 00:16:41.430 "data_offset": 2048, 00:16:41.430 "data_size": 63488 00:16:41.430 }, 00:16:41.430 { 00:16:41.430 "name": "BaseBdev4", 00:16:41.430 "uuid": "f646152f-b104-5525-90e5-c474d6a5eafb", 00:16:41.430 "is_configured": true, 00:16:41.430 "data_offset": 2048, 00:16:41.430 "data_size": 63488 00:16:41.430 } 00:16:41.430 ] 00:16:41.430 }' 00:16:41.430 13:26:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.430 13:26:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.000 13:26:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:42.000 13:26:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:42.000 13:26:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:42.000 13:26:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:42.000 13:26:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:42.000 13:26:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.000 13:26:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.000 13:26:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.000 13:26:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.000 13:26:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.000 13:26:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:42.000 "name": 
"raid_bdev1", 00:16:42.000 "uuid": "64b49ab3-fd0a-4b57-ad35-22e5318313bd", 00:16:42.000 "strip_size_kb": 64, 00:16:42.000 "state": "online", 00:16:42.000 "raid_level": "raid5f", 00:16:42.000 "superblock": true, 00:16:42.000 "num_base_bdevs": 4, 00:16:42.000 "num_base_bdevs_discovered": 4, 00:16:42.000 "num_base_bdevs_operational": 4, 00:16:42.000 "base_bdevs_list": [ 00:16:42.000 { 00:16:42.000 "name": "spare", 00:16:42.000 "uuid": "0d1bb1ac-38a3-59e9-bc4e-be3f9432bc6a", 00:16:42.000 "is_configured": true, 00:16:42.000 "data_offset": 2048, 00:16:42.000 "data_size": 63488 00:16:42.000 }, 00:16:42.000 { 00:16:42.000 "name": "BaseBdev2", 00:16:42.000 "uuid": "812a063c-b5de-5864-a788-dead020d1c80", 00:16:42.000 "is_configured": true, 00:16:42.000 "data_offset": 2048, 00:16:42.000 "data_size": 63488 00:16:42.000 }, 00:16:42.000 { 00:16:42.000 "name": "BaseBdev3", 00:16:42.000 "uuid": "177d1e51-58a4-5d97-aeda-0d8261c848e1", 00:16:42.000 "is_configured": true, 00:16:42.000 "data_offset": 2048, 00:16:42.000 "data_size": 63488 00:16:42.000 }, 00:16:42.000 { 00:16:42.000 "name": "BaseBdev4", 00:16:42.000 "uuid": "f646152f-b104-5525-90e5-c474d6a5eafb", 00:16:42.000 "is_configured": true, 00:16:42.000 "data_offset": 2048, 00:16:42.000 "data_size": 63488 00:16:42.000 } 00:16:42.000 ] 00:16:42.000 }' 00:16:42.000 13:26:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:42.000 13:26:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:42.000 13:26:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:42.000 13:26:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:42.000 13:26:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.000 13:26:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 
00:16:42.000 13:26:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.000 13:26:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.000 13:26:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.000 13:26:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:42.000 13:26:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:42.000 13:26:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.000 13:26:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.000 [2024-11-17 13:26:31.135545] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:42.000 13:26:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.000 13:26:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:42.000 13:26:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:42.000 13:26:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:42.000 13:26:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:42.000 13:26:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:42.000 13:26:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:42.000 13:26:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.000 13:26:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.000 13:26:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:16:42.000 13:26:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.000 13:26:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.000 13:26:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.000 13:26:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.000 13:26:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.000 13:26:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.000 13:26:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.000 "name": "raid_bdev1", 00:16:42.000 "uuid": "64b49ab3-fd0a-4b57-ad35-22e5318313bd", 00:16:42.000 "strip_size_kb": 64, 00:16:42.000 "state": "online", 00:16:42.000 "raid_level": "raid5f", 00:16:42.000 "superblock": true, 00:16:42.000 "num_base_bdevs": 4, 00:16:42.000 "num_base_bdevs_discovered": 3, 00:16:42.000 "num_base_bdevs_operational": 3, 00:16:42.000 "base_bdevs_list": [ 00:16:42.000 { 00:16:42.000 "name": null, 00:16:42.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.000 "is_configured": false, 00:16:42.000 "data_offset": 0, 00:16:42.001 "data_size": 63488 00:16:42.001 }, 00:16:42.001 { 00:16:42.001 "name": "BaseBdev2", 00:16:42.001 "uuid": "812a063c-b5de-5864-a788-dead020d1c80", 00:16:42.001 "is_configured": true, 00:16:42.001 "data_offset": 2048, 00:16:42.001 "data_size": 63488 00:16:42.001 }, 00:16:42.001 { 00:16:42.001 "name": "BaseBdev3", 00:16:42.001 "uuid": "177d1e51-58a4-5d97-aeda-0d8261c848e1", 00:16:42.001 "is_configured": true, 00:16:42.001 "data_offset": 2048, 00:16:42.001 "data_size": 63488 00:16:42.001 }, 00:16:42.001 { 00:16:42.001 "name": "BaseBdev4", 00:16:42.001 "uuid": "f646152f-b104-5525-90e5-c474d6a5eafb", 00:16:42.001 "is_configured": true, 00:16:42.001 "data_offset": 
2048, 00:16:42.001 "data_size": 63488 00:16:42.001 } 00:16:42.001 ] 00:16:42.001 }' 00:16:42.001 13:26:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.001 13:26:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.571 13:26:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:42.571 13:26:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.571 13:26:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.571 [2024-11-17 13:26:31.590855] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:42.571 [2024-11-17 13:26:31.591118] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:42.571 [2024-11-17 13:26:31.591183] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:42.571 [2024-11-17 13:26:31.591269] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:42.571 [2024-11-17 13:26:31.604943] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:16:42.571 13:26:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.571 13:26:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:42.571 [2024-11-17 13:26:31.613514] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:43.510 13:26:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:43.510 13:26:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:43.510 13:26:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:43.510 13:26:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:43.510 13:26:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:43.510 13:26:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.510 13:26:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.510 13:26:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.510 13:26:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.510 13:26:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.510 13:26:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:43.510 "name": "raid_bdev1", 00:16:43.510 "uuid": "64b49ab3-fd0a-4b57-ad35-22e5318313bd", 00:16:43.510 "strip_size_kb": 64, 00:16:43.510 "state": "online", 00:16:43.510 
"raid_level": "raid5f", 00:16:43.510 "superblock": true, 00:16:43.510 "num_base_bdevs": 4, 00:16:43.510 "num_base_bdevs_discovered": 4, 00:16:43.510 "num_base_bdevs_operational": 4, 00:16:43.510 "process": { 00:16:43.510 "type": "rebuild", 00:16:43.510 "target": "spare", 00:16:43.510 "progress": { 00:16:43.510 "blocks": 19200, 00:16:43.510 "percent": 10 00:16:43.510 } 00:16:43.510 }, 00:16:43.510 "base_bdevs_list": [ 00:16:43.510 { 00:16:43.510 "name": "spare", 00:16:43.510 "uuid": "0d1bb1ac-38a3-59e9-bc4e-be3f9432bc6a", 00:16:43.510 "is_configured": true, 00:16:43.510 "data_offset": 2048, 00:16:43.510 "data_size": 63488 00:16:43.510 }, 00:16:43.510 { 00:16:43.510 "name": "BaseBdev2", 00:16:43.510 "uuid": "812a063c-b5de-5864-a788-dead020d1c80", 00:16:43.510 "is_configured": true, 00:16:43.510 "data_offset": 2048, 00:16:43.510 "data_size": 63488 00:16:43.510 }, 00:16:43.510 { 00:16:43.510 "name": "BaseBdev3", 00:16:43.510 "uuid": "177d1e51-58a4-5d97-aeda-0d8261c848e1", 00:16:43.510 "is_configured": true, 00:16:43.510 "data_offset": 2048, 00:16:43.510 "data_size": 63488 00:16:43.510 }, 00:16:43.510 { 00:16:43.510 "name": "BaseBdev4", 00:16:43.510 "uuid": "f646152f-b104-5525-90e5-c474d6a5eafb", 00:16:43.510 "is_configured": true, 00:16:43.510 "data_offset": 2048, 00:16:43.510 "data_size": 63488 00:16:43.510 } 00:16:43.510 ] 00:16:43.510 }' 00:16:43.510 13:26:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:43.510 13:26:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:43.510 13:26:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:43.770 13:26:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:43.770 13:26:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:43.770 13:26:32 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.770 13:26:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.770 [2024-11-17 13:26:32.768092] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:43.770 [2024-11-17 13:26:32.819837] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:43.770 [2024-11-17 13:26:32.819929] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:43.770 [2024-11-17 13:26:32.819950] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:43.770 [2024-11-17 13:26:32.819961] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:43.770 13:26:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.770 13:26:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:43.771 13:26:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:43.771 13:26:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:43.771 13:26:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:43.771 13:26:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:43.771 13:26:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:43.771 13:26:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.771 13:26:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.771 13:26:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.771 13:26:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:16:43.771 13:26:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.771 13:26:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.771 13:26:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.771 13:26:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.771 13:26:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.771 13:26:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.771 "name": "raid_bdev1", 00:16:43.771 "uuid": "64b49ab3-fd0a-4b57-ad35-22e5318313bd", 00:16:43.771 "strip_size_kb": 64, 00:16:43.771 "state": "online", 00:16:43.771 "raid_level": "raid5f", 00:16:43.771 "superblock": true, 00:16:43.771 "num_base_bdevs": 4, 00:16:43.771 "num_base_bdevs_discovered": 3, 00:16:43.771 "num_base_bdevs_operational": 3, 00:16:43.771 "base_bdevs_list": [ 00:16:43.771 { 00:16:43.771 "name": null, 00:16:43.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.771 "is_configured": false, 00:16:43.771 "data_offset": 0, 00:16:43.771 "data_size": 63488 00:16:43.771 }, 00:16:43.771 { 00:16:43.771 "name": "BaseBdev2", 00:16:43.771 "uuid": "812a063c-b5de-5864-a788-dead020d1c80", 00:16:43.771 "is_configured": true, 00:16:43.771 "data_offset": 2048, 00:16:43.771 "data_size": 63488 00:16:43.771 }, 00:16:43.771 { 00:16:43.771 "name": "BaseBdev3", 00:16:43.771 "uuid": "177d1e51-58a4-5d97-aeda-0d8261c848e1", 00:16:43.771 "is_configured": true, 00:16:43.771 "data_offset": 2048, 00:16:43.771 "data_size": 63488 00:16:43.771 }, 00:16:43.771 { 00:16:43.771 "name": "BaseBdev4", 00:16:43.771 "uuid": "f646152f-b104-5525-90e5-c474d6a5eafb", 00:16:43.771 "is_configured": true, 00:16:43.771 "data_offset": 2048, 00:16:43.771 "data_size": 63488 00:16:43.771 } 00:16:43.771 ] 00:16:43.771 
}' 00:16:43.771 13:26:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.771 13:26:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.340 13:26:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:44.340 13:26:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.340 13:26:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.340 [2024-11-17 13:26:33.310457] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:44.340 [2024-11-17 13:26:33.310589] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:44.340 [2024-11-17 13:26:33.310622] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:16:44.340 [2024-11-17 13:26:33.310635] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:44.340 [2024-11-17 13:26:33.311125] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:44.340 [2024-11-17 13:26:33.311147] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:44.340 [2024-11-17 13:26:33.311329] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:44.340 [2024-11-17 13:26:33.311383] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:44.340 [2024-11-17 13:26:33.311432] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:44.340 [2024-11-17 13:26:33.311532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:44.340 [2024-11-17 13:26:33.326571] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:16:44.340 spare 00:16:44.340 13:26:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.340 13:26:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:44.340 [2024-11-17 13:26:33.335985] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:45.280 13:26:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:45.280 13:26:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:45.280 13:26:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:45.280 13:26:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:45.280 13:26:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:45.280 13:26:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.280 13:26:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.280 13:26:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.280 13:26:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.280 13:26:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.280 13:26:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:45.280 "name": "raid_bdev1", 00:16:45.280 "uuid": "64b49ab3-fd0a-4b57-ad35-22e5318313bd", 00:16:45.280 "strip_size_kb": 64, 00:16:45.280 "state": 
"online", 00:16:45.280 "raid_level": "raid5f", 00:16:45.280 "superblock": true, 00:16:45.280 "num_base_bdevs": 4, 00:16:45.280 "num_base_bdevs_discovered": 4, 00:16:45.280 "num_base_bdevs_operational": 4, 00:16:45.280 "process": { 00:16:45.280 "type": "rebuild", 00:16:45.280 "target": "spare", 00:16:45.280 "progress": { 00:16:45.280 "blocks": 19200, 00:16:45.280 "percent": 10 00:16:45.280 } 00:16:45.280 }, 00:16:45.280 "base_bdevs_list": [ 00:16:45.280 { 00:16:45.280 "name": "spare", 00:16:45.280 "uuid": "0d1bb1ac-38a3-59e9-bc4e-be3f9432bc6a", 00:16:45.280 "is_configured": true, 00:16:45.280 "data_offset": 2048, 00:16:45.280 "data_size": 63488 00:16:45.280 }, 00:16:45.280 { 00:16:45.280 "name": "BaseBdev2", 00:16:45.280 "uuid": "812a063c-b5de-5864-a788-dead020d1c80", 00:16:45.280 "is_configured": true, 00:16:45.280 "data_offset": 2048, 00:16:45.280 "data_size": 63488 00:16:45.280 }, 00:16:45.280 { 00:16:45.280 "name": "BaseBdev3", 00:16:45.280 "uuid": "177d1e51-58a4-5d97-aeda-0d8261c848e1", 00:16:45.280 "is_configured": true, 00:16:45.280 "data_offset": 2048, 00:16:45.280 "data_size": 63488 00:16:45.280 }, 00:16:45.280 { 00:16:45.280 "name": "BaseBdev4", 00:16:45.280 "uuid": "f646152f-b104-5525-90e5-c474d6a5eafb", 00:16:45.280 "is_configured": true, 00:16:45.280 "data_offset": 2048, 00:16:45.280 "data_size": 63488 00:16:45.280 } 00:16:45.280 ] 00:16:45.280 }' 00:16:45.280 13:26:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:45.280 13:26:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:45.280 13:26:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:45.280 13:26:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:45.280 13:26:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:45.280 13:26:34 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.280 13:26:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.280 [2024-11-17 13:26:34.491108] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:45.541 [2024-11-17 13:26:34.542435] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:45.541 [2024-11-17 13:26:34.542533] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:45.541 [2024-11-17 13:26:34.542556] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:45.541 [2024-11-17 13:26:34.542563] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:45.541 13:26:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.541 13:26:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:45.541 13:26:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:45.541 13:26:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:45.541 13:26:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:45.541 13:26:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:45.541 13:26:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:45.541 13:26:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:45.541 13:26:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.541 13:26:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:45.541 13:26:34 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:45.541 13:26:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.541 13:26:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.541 13:26:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.541 13:26:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.541 13:26:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.541 13:26:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:45.541 "name": "raid_bdev1", 00:16:45.541 "uuid": "64b49ab3-fd0a-4b57-ad35-22e5318313bd", 00:16:45.541 "strip_size_kb": 64, 00:16:45.541 "state": "online", 00:16:45.541 "raid_level": "raid5f", 00:16:45.541 "superblock": true, 00:16:45.541 "num_base_bdevs": 4, 00:16:45.541 "num_base_bdevs_discovered": 3, 00:16:45.541 "num_base_bdevs_operational": 3, 00:16:45.541 "base_bdevs_list": [ 00:16:45.541 { 00:16:45.541 "name": null, 00:16:45.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.541 "is_configured": false, 00:16:45.541 "data_offset": 0, 00:16:45.541 "data_size": 63488 00:16:45.541 }, 00:16:45.541 { 00:16:45.541 "name": "BaseBdev2", 00:16:45.541 "uuid": "812a063c-b5de-5864-a788-dead020d1c80", 00:16:45.541 "is_configured": true, 00:16:45.541 "data_offset": 2048, 00:16:45.541 "data_size": 63488 00:16:45.541 }, 00:16:45.541 { 00:16:45.541 "name": "BaseBdev3", 00:16:45.541 "uuid": "177d1e51-58a4-5d97-aeda-0d8261c848e1", 00:16:45.541 "is_configured": true, 00:16:45.541 "data_offset": 2048, 00:16:45.541 "data_size": 63488 00:16:45.541 }, 00:16:45.541 { 00:16:45.541 "name": "BaseBdev4", 00:16:45.541 "uuid": "f646152f-b104-5525-90e5-c474d6a5eafb", 00:16:45.541 "is_configured": true, 00:16:45.541 "data_offset": 2048, 00:16:45.541 
"data_size": 63488 00:16:45.541 } 00:16:45.541 ] 00:16:45.541 }' 00:16:45.541 13:26:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:45.541 13:26:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.802 13:26:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:45.802 13:26:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:45.802 13:26:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:45.802 13:26:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:45.802 13:26:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:45.802 13:26:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.802 13:26:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.802 13:26:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.802 13:26:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.802 13:26:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.062 13:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:46.062 "name": "raid_bdev1", 00:16:46.062 "uuid": "64b49ab3-fd0a-4b57-ad35-22e5318313bd", 00:16:46.062 "strip_size_kb": 64, 00:16:46.062 "state": "online", 00:16:46.062 "raid_level": "raid5f", 00:16:46.062 "superblock": true, 00:16:46.062 "num_base_bdevs": 4, 00:16:46.062 "num_base_bdevs_discovered": 3, 00:16:46.062 "num_base_bdevs_operational": 3, 00:16:46.062 "base_bdevs_list": [ 00:16:46.063 { 00:16:46.063 "name": null, 00:16:46.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.063 
"is_configured": false, 00:16:46.063 "data_offset": 0, 00:16:46.063 "data_size": 63488 00:16:46.063 }, 00:16:46.063 { 00:16:46.063 "name": "BaseBdev2", 00:16:46.063 "uuid": "812a063c-b5de-5864-a788-dead020d1c80", 00:16:46.063 "is_configured": true, 00:16:46.063 "data_offset": 2048, 00:16:46.063 "data_size": 63488 00:16:46.063 }, 00:16:46.063 { 00:16:46.063 "name": "BaseBdev3", 00:16:46.063 "uuid": "177d1e51-58a4-5d97-aeda-0d8261c848e1", 00:16:46.063 "is_configured": true, 00:16:46.063 "data_offset": 2048, 00:16:46.063 "data_size": 63488 00:16:46.063 }, 00:16:46.063 { 00:16:46.063 "name": "BaseBdev4", 00:16:46.063 "uuid": "f646152f-b104-5525-90e5-c474d6a5eafb", 00:16:46.063 "is_configured": true, 00:16:46.063 "data_offset": 2048, 00:16:46.063 "data_size": 63488 00:16:46.063 } 00:16:46.063 ] 00:16:46.063 }' 00:16:46.063 13:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:46.063 13:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:46.063 13:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:46.063 13:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:46.063 13:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:46.063 13:26:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.063 13:26:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.063 13:26:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.063 13:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:46.063 13:26:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.063 13:26:35 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.063 [2024-11-17 13:26:35.140297] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:46.063 [2024-11-17 13:26:35.140394] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:46.063 [2024-11-17 13:26:35.140449] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:16:46.063 [2024-11-17 13:26:35.140479] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:46.063 [2024-11-17 13:26:35.140944] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:46.063 [2024-11-17 13:26:35.141007] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:46.063 [2024-11-17 13:26:35.141123] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:46.063 [2024-11-17 13:26:35.141162] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:46.063 [2024-11-17 13:26:35.141203] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:46.063 [2024-11-17 13:26:35.141266] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:46.063 BaseBdev1 00:16:46.063 13:26:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.063 13:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:47.002 13:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:47.002 13:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:47.002 13:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:47.002 13:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:47.002 13:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:47.002 13:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:47.002 13:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.002 13:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.002 13:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:47.003 13:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:47.003 13:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.003 13:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.003 13:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.003 13:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.003 13:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.003 13:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.003 "name": "raid_bdev1", 00:16:47.003 "uuid": "64b49ab3-fd0a-4b57-ad35-22e5318313bd", 00:16:47.003 "strip_size_kb": 64, 00:16:47.003 "state": "online", 00:16:47.003 "raid_level": "raid5f", 00:16:47.003 "superblock": true, 00:16:47.003 "num_base_bdevs": 4, 00:16:47.003 "num_base_bdevs_discovered": 3, 00:16:47.003 "num_base_bdevs_operational": 3, 00:16:47.003 "base_bdevs_list": [ 00:16:47.003 { 00:16:47.003 "name": null, 00:16:47.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.003 "is_configured": false, 00:16:47.003 
"data_offset": 0, 00:16:47.003 "data_size": 63488 00:16:47.003 }, 00:16:47.003 { 00:16:47.003 "name": "BaseBdev2", 00:16:47.003 "uuid": "812a063c-b5de-5864-a788-dead020d1c80", 00:16:47.003 "is_configured": true, 00:16:47.003 "data_offset": 2048, 00:16:47.003 "data_size": 63488 00:16:47.003 }, 00:16:47.003 { 00:16:47.003 "name": "BaseBdev3", 00:16:47.003 "uuid": "177d1e51-58a4-5d97-aeda-0d8261c848e1", 00:16:47.003 "is_configured": true, 00:16:47.003 "data_offset": 2048, 00:16:47.003 "data_size": 63488 00:16:47.003 }, 00:16:47.003 { 00:16:47.003 "name": "BaseBdev4", 00:16:47.003 "uuid": "f646152f-b104-5525-90e5-c474d6a5eafb", 00:16:47.003 "is_configured": true, 00:16:47.003 "data_offset": 2048, 00:16:47.003 "data_size": 63488 00:16:47.003 } 00:16:47.003 ] 00:16:47.003 }' 00:16:47.003 13:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.003 13:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.572 13:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:47.572 13:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:47.572 13:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:47.572 13:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:47.572 13:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:47.572 13:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.572 13:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.572 13:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.572 13:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:16:47.572 13:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.572 13:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:47.572 "name": "raid_bdev1", 00:16:47.572 "uuid": "64b49ab3-fd0a-4b57-ad35-22e5318313bd", 00:16:47.572 "strip_size_kb": 64, 00:16:47.572 "state": "online", 00:16:47.572 "raid_level": "raid5f", 00:16:47.572 "superblock": true, 00:16:47.572 "num_base_bdevs": 4, 00:16:47.572 "num_base_bdevs_discovered": 3, 00:16:47.572 "num_base_bdevs_operational": 3, 00:16:47.572 "base_bdevs_list": [ 00:16:47.572 { 00:16:47.572 "name": null, 00:16:47.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.572 "is_configured": false, 00:16:47.572 "data_offset": 0, 00:16:47.572 "data_size": 63488 00:16:47.572 }, 00:16:47.572 { 00:16:47.572 "name": "BaseBdev2", 00:16:47.572 "uuid": "812a063c-b5de-5864-a788-dead020d1c80", 00:16:47.572 "is_configured": true, 00:16:47.572 "data_offset": 2048, 00:16:47.572 "data_size": 63488 00:16:47.572 }, 00:16:47.572 { 00:16:47.572 "name": "BaseBdev3", 00:16:47.572 "uuid": "177d1e51-58a4-5d97-aeda-0d8261c848e1", 00:16:47.572 "is_configured": true, 00:16:47.572 "data_offset": 2048, 00:16:47.572 "data_size": 63488 00:16:47.572 }, 00:16:47.572 { 00:16:47.572 "name": "BaseBdev4", 00:16:47.572 "uuid": "f646152f-b104-5525-90e5-c474d6a5eafb", 00:16:47.572 "is_configured": true, 00:16:47.572 "data_offset": 2048, 00:16:47.572 "data_size": 63488 00:16:47.572 } 00:16:47.572 ] 00:16:47.572 }' 00:16:47.572 13:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:47.572 13:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:47.572 13:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:47.572 13:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:47.572 
13:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:47.572 13:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:16:47.572 13:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:47.572 13:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:47.572 13:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:47.572 13:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:47.572 13:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:47.572 13:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:47.572 13:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.572 13:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.572 [2024-11-17 13:26:36.789758] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:47.572 [2024-11-17 13:26:36.789920] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:47.572 [2024-11-17 13:26:36.789937] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:47.572 request: 00:16:47.572 { 00:16:47.572 "base_bdev": "BaseBdev1", 00:16:47.572 "raid_bdev": "raid_bdev1", 00:16:47.572 "method": "bdev_raid_add_base_bdev", 00:16:47.572 "req_id": 1 00:16:47.572 } 00:16:47.572 Got JSON-RPC error response 00:16:47.572 response: 00:16:47.572 { 00:16:47.572 "code": -22, 00:16:47.572 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:16:47.572 } 00:16:47.832 13:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:47.832 13:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:16:47.832 13:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:47.832 13:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:47.832 13:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:47.832 13:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:48.775 13:26:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:48.775 13:26:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:48.775 13:26:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:48.775 13:26:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:48.775 13:26:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:48.776 13:26:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:48.776 13:26:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.776 13:26:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:48.776 13:26:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:48.776 13:26:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:48.776 13:26:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.776 13:26:37 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.776 13:26:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.776 13:26:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.776 13:26:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.776 13:26:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.776 "name": "raid_bdev1", 00:16:48.776 "uuid": "64b49ab3-fd0a-4b57-ad35-22e5318313bd", 00:16:48.776 "strip_size_kb": 64, 00:16:48.776 "state": "online", 00:16:48.776 "raid_level": "raid5f", 00:16:48.776 "superblock": true, 00:16:48.776 "num_base_bdevs": 4, 00:16:48.776 "num_base_bdevs_discovered": 3, 00:16:48.776 "num_base_bdevs_operational": 3, 00:16:48.776 "base_bdevs_list": [ 00:16:48.776 { 00:16:48.776 "name": null, 00:16:48.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.776 "is_configured": false, 00:16:48.776 "data_offset": 0, 00:16:48.776 "data_size": 63488 00:16:48.776 }, 00:16:48.776 { 00:16:48.776 "name": "BaseBdev2", 00:16:48.776 "uuid": "812a063c-b5de-5864-a788-dead020d1c80", 00:16:48.776 "is_configured": true, 00:16:48.776 "data_offset": 2048, 00:16:48.776 "data_size": 63488 00:16:48.776 }, 00:16:48.776 { 00:16:48.776 "name": "BaseBdev3", 00:16:48.776 "uuid": "177d1e51-58a4-5d97-aeda-0d8261c848e1", 00:16:48.776 "is_configured": true, 00:16:48.776 "data_offset": 2048, 00:16:48.776 "data_size": 63488 00:16:48.776 }, 00:16:48.776 { 00:16:48.776 "name": "BaseBdev4", 00:16:48.776 "uuid": "f646152f-b104-5525-90e5-c474d6a5eafb", 00:16:48.776 "is_configured": true, 00:16:48.776 "data_offset": 2048, 00:16:48.776 "data_size": 63488 00:16:48.776 } 00:16:48.776 ] 00:16:48.776 }' 00:16:48.776 13:26:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.776 13:26:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:16:49.036 13:26:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:49.036 13:26:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:49.036 13:26:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:49.036 13:26:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:49.036 13:26:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:49.036 13:26:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.036 13:26:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.036 13:26:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.036 13:26:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.036 13:26:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.297 13:26:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:49.297 "name": "raid_bdev1", 00:16:49.297 "uuid": "64b49ab3-fd0a-4b57-ad35-22e5318313bd", 00:16:49.297 "strip_size_kb": 64, 00:16:49.297 "state": "online", 00:16:49.297 "raid_level": "raid5f", 00:16:49.297 "superblock": true, 00:16:49.297 "num_base_bdevs": 4, 00:16:49.297 "num_base_bdevs_discovered": 3, 00:16:49.297 "num_base_bdevs_operational": 3, 00:16:49.297 "base_bdevs_list": [ 00:16:49.297 { 00:16:49.297 "name": null, 00:16:49.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.297 "is_configured": false, 00:16:49.297 "data_offset": 0, 00:16:49.297 "data_size": 63488 00:16:49.297 }, 00:16:49.297 { 00:16:49.297 "name": "BaseBdev2", 00:16:49.297 "uuid": "812a063c-b5de-5864-a788-dead020d1c80", 00:16:49.297 "is_configured": true, 
00:16:49.297 "data_offset": 2048, 00:16:49.297 "data_size": 63488 00:16:49.297 }, 00:16:49.297 { 00:16:49.297 "name": "BaseBdev3", 00:16:49.297 "uuid": "177d1e51-58a4-5d97-aeda-0d8261c848e1", 00:16:49.297 "is_configured": true, 00:16:49.297 "data_offset": 2048, 00:16:49.297 "data_size": 63488 00:16:49.297 }, 00:16:49.297 { 00:16:49.297 "name": "BaseBdev4", 00:16:49.297 "uuid": "f646152f-b104-5525-90e5-c474d6a5eafb", 00:16:49.297 "is_configured": true, 00:16:49.297 "data_offset": 2048, 00:16:49.297 "data_size": 63488 00:16:49.297 } 00:16:49.297 ] 00:16:49.297 }' 00:16:49.297 13:26:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:49.297 13:26:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:49.297 13:26:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:49.297 13:26:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:49.297 13:26:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 84980 00:16:49.297 13:26:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 84980 ']' 00:16:49.297 13:26:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 84980 00:16:49.297 13:26:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:49.297 13:26:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:49.297 13:26:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84980 00:16:49.297 13:26:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:49.297 killing process with pid 84980 00:16:49.297 Received shutdown signal, test time was about 60.000000 seconds 00:16:49.297 00:16:49.297 Latency(us) 00:16:49.297 [2024-11-17T13:26:38.521Z] Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:49.297 [2024-11-17T13:26:38.521Z] =================================================================================================================== 00:16:49.297 [2024-11-17T13:26:38.521Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:49.297 13:26:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:49.297 13:26:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84980' 00:16:49.297 13:26:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 84980 00:16:49.297 [2024-11-17 13:26:38.392006] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:49.297 [2024-11-17 13:26:38.392123] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:49.297 13:26:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 84980 00:16:49.297 [2024-11-17 13:26:38.392194] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:49.297 [2024-11-17 13:26:38.392207] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:49.868 [2024-11-17 13:26:38.856721] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:50.809 13:26:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:16:50.809 00:16:50.809 real 0m26.908s 00:16:50.809 user 0m33.791s 00:16:50.809 sys 0m3.081s 00:16:50.809 13:26:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:50.809 ************************************ 00:16:50.809 END TEST raid5f_rebuild_test_sb 00:16:50.809 ************************************ 00:16:50.809 13:26:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.809 13:26:39 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:16:50.809 13:26:39 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:16:50.809 13:26:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:50.809 13:26:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:50.809 13:26:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:50.809 ************************************ 00:16:50.809 START TEST raid_state_function_test_sb_4k 00:16:50.809 ************************************ 00:16:50.809 13:26:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:16:50.809 13:26:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:16:50.809 13:26:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:16:50.809 13:26:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:50.809 13:26:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:50.809 13:26:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:50.809 13:26:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:50.809 13:26:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:50.809 13:26:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:50.809 13:26:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:50.809 13:26:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:50.809 13:26:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:50.809 13:26:39 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:50.809 13:26:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:50.809 13:26:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:50.809 13:26:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:50.809 13:26:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:50.809 13:26:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:50.809 13:26:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:50.809 13:26:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:16:50.809 13:26:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:16:50.809 13:26:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:50.809 13:26:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:50.809 Process raid pid: 85790 00:16:50.809 13:26:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=85790 00:16:50.809 13:26:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 85790' 00:16:50.809 13:26:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:50.809 13:26:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 85790 00:16:50.809 13:26:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 85790 ']' 00:16:50.809 13:26:39 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:50.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:50.809 13:26:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:50.809 13:26:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:50.809 13:26:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:50.809 13:26:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:51.069 [2024-11-17 13:26:40.075770] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:16:51.069 [2024-11-17 13:26:40.075973] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:51.069 [2024-11-17 13:26:40.257461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.329 [2024-11-17 13:26:40.369350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.590 [2024-11-17 13:26:40.559148] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:51.590 [2024-11-17 13:26:40.559248] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:51.850 13:26:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:51.850 13:26:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:16:51.850 13:26:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:16:51.850 13:26:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.850 13:26:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:51.850 [2024-11-17 13:26:40.919645] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:51.850 [2024-11-17 13:26:40.919709] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:51.850 [2024-11-17 13:26:40.919719] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:51.850 [2024-11-17 13:26:40.919728] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:51.850 13:26:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.850 13:26:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:51.850 13:26:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:51.850 13:26:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:51.850 13:26:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:51.850 13:26:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:51.850 13:26:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:51.850 13:26:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:51.850 13:26:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:51.850 13:26:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:51.850 
13:26:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:51.850 13:26:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.850 13:26:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:51.850 13:26:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.850 13:26:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:51.850 13:26:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.850 13:26:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:51.850 "name": "Existed_Raid", 00:16:51.850 "uuid": "1feaf07d-209b-4888-b384-230a44d6c1fe", 00:16:51.850 "strip_size_kb": 0, 00:16:51.850 "state": "configuring", 00:16:51.850 "raid_level": "raid1", 00:16:51.850 "superblock": true, 00:16:51.850 "num_base_bdevs": 2, 00:16:51.850 "num_base_bdevs_discovered": 0, 00:16:51.851 "num_base_bdevs_operational": 2, 00:16:51.851 "base_bdevs_list": [ 00:16:51.851 { 00:16:51.851 "name": "BaseBdev1", 00:16:51.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.851 "is_configured": false, 00:16:51.851 "data_offset": 0, 00:16:51.851 "data_size": 0 00:16:51.851 }, 00:16:51.851 { 00:16:51.851 "name": "BaseBdev2", 00:16:51.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.851 "is_configured": false, 00:16:51.851 "data_offset": 0, 00:16:51.851 "data_size": 0 00:16:51.851 } 00:16:51.851 ] 00:16:51.851 }' 00:16:51.851 13:26:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.851 13:26:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.421 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:16:52.421 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.421 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.421 [2024-11-17 13:26:41.362837] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:52.421 [2024-11-17 13:26:41.362874] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:52.421 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.421 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:52.421 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.421 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.421 [2024-11-17 13:26:41.370822] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:52.421 [2024-11-17 13:26:41.370904] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:52.421 [2024-11-17 13:26:41.370932] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:52.421 [2024-11-17 13:26:41.370956] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:52.421 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.421 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:16:52.421 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.421 13:26:41 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.421 [2024-11-17 13:26:41.414195] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:52.421 BaseBdev1 00:16:52.421 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.421 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:52.421 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:52.421 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:52.421 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:16:52.421 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:52.422 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:52.422 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:52.422 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.422 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.422 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.422 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:52.422 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.422 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.422 [ 00:16:52.422 { 00:16:52.422 "name": "BaseBdev1", 00:16:52.422 "aliases": [ 00:16:52.422 
"dcc3803a-a088-49d7-bc1a-cc05efd5fd2b" 00:16:52.422 ], 00:16:52.422 "product_name": "Malloc disk", 00:16:52.422 "block_size": 4096, 00:16:52.422 "num_blocks": 8192, 00:16:52.422 "uuid": "dcc3803a-a088-49d7-bc1a-cc05efd5fd2b", 00:16:52.422 "assigned_rate_limits": { 00:16:52.422 "rw_ios_per_sec": 0, 00:16:52.422 "rw_mbytes_per_sec": 0, 00:16:52.422 "r_mbytes_per_sec": 0, 00:16:52.422 "w_mbytes_per_sec": 0 00:16:52.422 }, 00:16:52.422 "claimed": true, 00:16:52.422 "claim_type": "exclusive_write", 00:16:52.422 "zoned": false, 00:16:52.422 "supported_io_types": { 00:16:52.422 "read": true, 00:16:52.422 "write": true, 00:16:52.422 "unmap": true, 00:16:52.422 "flush": true, 00:16:52.422 "reset": true, 00:16:52.422 "nvme_admin": false, 00:16:52.422 "nvme_io": false, 00:16:52.422 "nvme_io_md": false, 00:16:52.422 "write_zeroes": true, 00:16:52.422 "zcopy": true, 00:16:52.422 "get_zone_info": false, 00:16:52.422 "zone_management": false, 00:16:52.422 "zone_append": false, 00:16:52.422 "compare": false, 00:16:52.422 "compare_and_write": false, 00:16:52.422 "abort": true, 00:16:52.422 "seek_hole": false, 00:16:52.422 "seek_data": false, 00:16:52.422 "copy": true, 00:16:52.422 "nvme_iov_md": false 00:16:52.422 }, 00:16:52.422 "memory_domains": [ 00:16:52.422 { 00:16:52.422 "dma_device_id": "system", 00:16:52.422 "dma_device_type": 1 00:16:52.422 }, 00:16:52.422 { 00:16:52.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:52.422 "dma_device_type": 2 00:16:52.422 } 00:16:52.422 ], 00:16:52.422 "driver_specific": {} 00:16:52.422 } 00:16:52.422 ] 00:16:52.422 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.422 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:16:52.422 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:52.422 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:52.422 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:52.422 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:52.422 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:52.422 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:52.422 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.422 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.422 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.422 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.422 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:52.422 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.422 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.422 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.422 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.422 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.422 "name": "Existed_Raid", 00:16:52.422 "uuid": "07190687-f0bb-43e1-b288-d7c21bfb90f1", 00:16:52.422 "strip_size_kb": 0, 00:16:52.422 "state": "configuring", 00:16:52.422 "raid_level": "raid1", 00:16:52.422 "superblock": true, 00:16:52.422 "num_base_bdevs": 2, 00:16:52.422 
"num_base_bdevs_discovered": 1, 00:16:52.422 "num_base_bdevs_operational": 2, 00:16:52.422 "base_bdevs_list": [ 00:16:52.422 { 00:16:52.422 "name": "BaseBdev1", 00:16:52.422 "uuid": "dcc3803a-a088-49d7-bc1a-cc05efd5fd2b", 00:16:52.422 "is_configured": true, 00:16:52.422 "data_offset": 256, 00:16:52.422 "data_size": 7936 00:16:52.422 }, 00:16:52.422 { 00:16:52.422 "name": "BaseBdev2", 00:16:52.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.422 "is_configured": false, 00:16:52.422 "data_offset": 0, 00:16:52.422 "data_size": 0 00:16:52.422 } 00:16:52.422 ] 00:16:52.422 }' 00:16:52.422 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.422 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.682 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:52.682 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.682 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.682 [2024-11-17 13:26:41.865444] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:52.682 [2024-11-17 13:26:41.865540] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:52.682 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.682 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:52.682 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.682 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.682 [2024-11-17 13:26:41.877476] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:52.682 [2024-11-17 13:26:41.879277] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:52.682 [2024-11-17 13:26:41.879318] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:52.682 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.682 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:52.682 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:52.682 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:52.682 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:52.682 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:52.682 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:52.682 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:52.682 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:52.682 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.682 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.682 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.682 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.682 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:52.682 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:52.682 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.682 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.942 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.942 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.942 "name": "Existed_Raid", 00:16:52.942 "uuid": "09d50059-144f-4f81-9da0-7c763dc27639", 00:16:52.942 "strip_size_kb": 0, 00:16:52.942 "state": "configuring", 00:16:52.942 "raid_level": "raid1", 00:16:52.942 "superblock": true, 00:16:52.942 "num_base_bdevs": 2, 00:16:52.942 "num_base_bdevs_discovered": 1, 00:16:52.942 "num_base_bdevs_operational": 2, 00:16:52.942 "base_bdevs_list": [ 00:16:52.942 { 00:16:52.942 "name": "BaseBdev1", 00:16:52.942 "uuid": "dcc3803a-a088-49d7-bc1a-cc05efd5fd2b", 00:16:52.942 "is_configured": true, 00:16:52.942 "data_offset": 256, 00:16:52.942 "data_size": 7936 00:16:52.942 }, 00:16:52.942 { 00:16:52.942 "name": "BaseBdev2", 00:16:52.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.942 "is_configured": false, 00:16:52.942 "data_offset": 0, 00:16:52.942 "data_size": 0 00:16:52.942 } 00:16:52.942 ] 00:16:52.942 }' 00:16:52.942 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.942 13:26:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.203 13:26:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:16:53.203 13:26:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.203 13:26:42 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.203 [2024-11-17 13:26:42.348799] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:53.203 [2024-11-17 13:26:42.349164] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:53.203 [2024-11-17 13:26:42.349236] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:53.203 [2024-11-17 13:26:42.349562] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:53.203 BaseBdev2 00:16:53.203 [2024-11-17 13:26:42.349759] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:53.203 [2024-11-17 13:26:42.349809] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:53.203 [2024-11-17 13:26:42.350012] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:53.203 13:26:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.203 13:26:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:53.203 13:26:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:53.203 13:26:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:53.203 13:26:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:16:53.203 13:26:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:53.203 13:26:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:53.203 13:26:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:53.203 13:26:42 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.203 13:26:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.203 13:26:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.203 13:26:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:53.203 13:26:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.203 13:26:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.203 [ 00:16:53.203 { 00:16:53.203 "name": "BaseBdev2", 00:16:53.203 "aliases": [ 00:16:53.203 "2a652e27-3a98-4d1c-8d2a-33b871fc55de" 00:16:53.203 ], 00:16:53.203 "product_name": "Malloc disk", 00:16:53.203 "block_size": 4096, 00:16:53.203 "num_blocks": 8192, 00:16:53.203 "uuid": "2a652e27-3a98-4d1c-8d2a-33b871fc55de", 00:16:53.203 "assigned_rate_limits": { 00:16:53.203 "rw_ios_per_sec": 0, 00:16:53.203 "rw_mbytes_per_sec": 0, 00:16:53.203 "r_mbytes_per_sec": 0, 00:16:53.203 "w_mbytes_per_sec": 0 00:16:53.203 }, 00:16:53.203 "claimed": true, 00:16:53.203 "claim_type": "exclusive_write", 00:16:53.203 "zoned": false, 00:16:53.203 "supported_io_types": { 00:16:53.203 "read": true, 00:16:53.203 "write": true, 00:16:53.203 "unmap": true, 00:16:53.203 "flush": true, 00:16:53.203 "reset": true, 00:16:53.203 "nvme_admin": false, 00:16:53.203 "nvme_io": false, 00:16:53.203 "nvme_io_md": false, 00:16:53.203 "write_zeroes": true, 00:16:53.203 "zcopy": true, 00:16:53.203 "get_zone_info": false, 00:16:53.203 "zone_management": false, 00:16:53.203 "zone_append": false, 00:16:53.203 "compare": false, 00:16:53.203 "compare_and_write": false, 00:16:53.203 "abort": true, 00:16:53.203 "seek_hole": false, 00:16:53.203 "seek_data": false, 00:16:53.203 "copy": true, 00:16:53.203 "nvme_iov_md": false 
00:16:53.203 }, 00:16:53.203 "memory_domains": [ 00:16:53.203 { 00:16:53.203 "dma_device_id": "system", 00:16:53.203 "dma_device_type": 1 00:16:53.203 }, 00:16:53.203 { 00:16:53.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:53.203 "dma_device_type": 2 00:16:53.203 } 00:16:53.203 ], 00:16:53.203 "driver_specific": {} 00:16:53.203 } 00:16:53.203 ] 00:16:53.203 13:26:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.203 13:26:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:16:53.203 13:26:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:53.203 13:26:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:53.203 13:26:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:53.203 13:26:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:53.203 13:26:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:53.204 13:26:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:53.204 13:26:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:53.204 13:26:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:53.204 13:26:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.204 13:26:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.204 13:26:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:53.204 13:26:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:16:53.204 13:26:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.204 13:26:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.204 13:26:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:53.204 13:26:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.204 13:26:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.463 13:26:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:53.463 "name": "Existed_Raid", 00:16:53.463 "uuid": "09d50059-144f-4f81-9da0-7c763dc27639", 00:16:53.463 "strip_size_kb": 0, 00:16:53.463 "state": "online", 00:16:53.463 "raid_level": "raid1", 00:16:53.463 "superblock": true, 00:16:53.463 "num_base_bdevs": 2, 00:16:53.463 "num_base_bdevs_discovered": 2, 00:16:53.463 "num_base_bdevs_operational": 2, 00:16:53.463 "base_bdevs_list": [ 00:16:53.463 { 00:16:53.463 "name": "BaseBdev1", 00:16:53.463 "uuid": "dcc3803a-a088-49d7-bc1a-cc05efd5fd2b", 00:16:53.463 "is_configured": true, 00:16:53.463 "data_offset": 256, 00:16:53.463 "data_size": 7936 00:16:53.463 }, 00:16:53.463 { 00:16:53.463 "name": "BaseBdev2", 00:16:53.463 "uuid": "2a652e27-3a98-4d1c-8d2a-33b871fc55de", 00:16:53.463 "is_configured": true, 00:16:53.463 "data_offset": 256, 00:16:53.463 "data_size": 7936 00:16:53.463 } 00:16:53.463 ] 00:16:53.463 }' 00:16:53.463 13:26:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:53.463 13:26:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.723 13:26:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:53.723 13:26:42 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:53.723 13:26:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:53.723 13:26:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:53.723 13:26:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:16:53.723 13:26:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:53.723 13:26:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:53.723 13:26:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:53.723 13:26:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.723 13:26:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.723 [2024-11-17 13:26:42.840261] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:53.723 13:26:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.723 13:26:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:53.723 "name": "Existed_Raid", 00:16:53.723 "aliases": [ 00:16:53.723 "09d50059-144f-4f81-9da0-7c763dc27639" 00:16:53.723 ], 00:16:53.723 "product_name": "Raid Volume", 00:16:53.723 "block_size": 4096, 00:16:53.723 "num_blocks": 7936, 00:16:53.723 "uuid": "09d50059-144f-4f81-9da0-7c763dc27639", 00:16:53.723 "assigned_rate_limits": { 00:16:53.723 "rw_ios_per_sec": 0, 00:16:53.723 "rw_mbytes_per_sec": 0, 00:16:53.723 "r_mbytes_per_sec": 0, 00:16:53.723 "w_mbytes_per_sec": 0 00:16:53.723 }, 00:16:53.723 "claimed": false, 00:16:53.723 "zoned": false, 00:16:53.723 "supported_io_types": { 00:16:53.723 "read": true, 
00:16:53.723 "write": true, 00:16:53.723 "unmap": false, 00:16:53.723 "flush": false, 00:16:53.723 "reset": true, 00:16:53.723 "nvme_admin": false, 00:16:53.723 "nvme_io": false, 00:16:53.723 "nvme_io_md": false, 00:16:53.723 "write_zeroes": true, 00:16:53.723 "zcopy": false, 00:16:53.723 "get_zone_info": false, 00:16:53.723 "zone_management": false, 00:16:53.723 "zone_append": false, 00:16:53.723 "compare": false, 00:16:53.723 "compare_and_write": false, 00:16:53.723 "abort": false, 00:16:53.723 "seek_hole": false, 00:16:53.723 "seek_data": false, 00:16:53.723 "copy": false, 00:16:53.723 "nvme_iov_md": false 00:16:53.723 }, 00:16:53.723 "memory_domains": [ 00:16:53.723 { 00:16:53.723 "dma_device_id": "system", 00:16:53.723 "dma_device_type": 1 00:16:53.723 }, 00:16:53.723 { 00:16:53.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:53.723 "dma_device_type": 2 00:16:53.723 }, 00:16:53.723 { 00:16:53.723 "dma_device_id": "system", 00:16:53.723 "dma_device_type": 1 00:16:53.723 }, 00:16:53.723 { 00:16:53.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:53.723 "dma_device_type": 2 00:16:53.723 } 00:16:53.723 ], 00:16:53.723 "driver_specific": { 00:16:53.723 "raid": { 00:16:53.723 "uuid": "09d50059-144f-4f81-9da0-7c763dc27639", 00:16:53.723 "strip_size_kb": 0, 00:16:53.723 "state": "online", 00:16:53.723 "raid_level": "raid1", 00:16:53.723 "superblock": true, 00:16:53.723 "num_base_bdevs": 2, 00:16:53.723 "num_base_bdevs_discovered": 2, 00:16:53.723 "num_base_bdevs_operational": 2, 00:16:53.723 "base_bdevs_list": [ 00:16:53.723 { 00:16:53.723 "name": "BaseBdev1", 00:16:53.723 "uuid": "dcc3803a-a088-49d7-bc1a-cc05efd5fd2b", 00:16:53.723 "is_configured": true, 00:16:53.723 "data_offset": 256, 00:16:53.723 "data_size": 7936 00:16:53.723 }, 00:16:53.723 { 00:16:53.723 "name": "BaseBdev2", 00:16:53.723 "uuid": "2a652e27-3a98-4d1c-8d2a-33b871fc55de", 00:16:53.723 "is_configured": true, 00:16:53.723 "data_offset": 256, 00:16:53.723 "data_size": 7936 00:16:53.723 } 
00:16:53.723 ] 00:16:53.723 } 00:16:53.723 } 00:16:53.723 }' 00:16:53.723 13:26:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:53.723 13:26:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:53.723 BaseBdev2' 00:16:53.723 13:26:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:53.985 13:26:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:16:53.986 13:26:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:53.986 13:26:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:53.986 13:26:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:53.986 13:26:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.986 13:26:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.986 13:26:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.986 13:26:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:53.986 13:26:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:53.986 13:26:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:53.986 13:26:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:53.986 13:26:43 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.986 13:26:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.986 13:26:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:53.986 13:26:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.986 13:26:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:53.986 13:26:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:53.986 13:26:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:53.986 13:26:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.986 13:26:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.986 [2024-11-17 13:26:43.091614] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:53.986 13:26:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.986 13:26:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:53.986 13:26:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:16:53.986 13:26:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:53.986 13:26:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:16:53.986 13:26:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:53.986 13:26:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:53.986 13:26:43 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:53.986 13:26:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:53.986 13:26:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:53.986 13:26:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:53.986 13:26:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:53.986 13:26:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.986 13:26:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.986 13:26:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:53.986 13:26:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.986 13:26:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.986 13:26:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:53.986 13:26:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.986 13:26:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.986 13:26:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.255 13:26:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.255 "name": "Existed_Raid", 00:16:54.255 "uuid": "09d50059-144f-4f81-9da0-7c763dc27639", 00:16:54.255 "strip_size_kb": 0, 00:16:54.255 "state": "online", 00:16:54.255 "raid_level": "raid1", 00:16:54.255 "superblock": true, 00:16:54.255 
"num_base_bdevs": 2, 00:16:54.255 "num_base_bdevs_discovered": 1, 00:16:54.255 "num_base_bdevs_operational": 1, 00:16:54.255 "base_bdevs_list": [ 00:16:54.255 { 00:16:54.255 "name": null, 00:16:54.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.255 "is_configured": false, 00:16:54.255 "data_offset": 0, 00:16:54.255 "data_size": 7936 00:16:54.255 }, 00:16:54.255 { 00:16:54.255 "name": "BaseBdev2", 00:16:54.255 "uuid": "2a652e27-3a98-4d1c-8d2a-33b871fc55de", 00:16:54.256 "is_configured": true, 00:16:54.256 "data_offset": 256, 00:16:54.256 "data_size": 7936 00:16:54.256 } 00:16:54.256 ] 00:16:54.256 }' 00:16:54.256 13:26:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.256 13:26:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.528 13:26:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:54.528 13:26:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:54.528 13:26:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.528 13:26:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:54.528 13:26:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.528 13:26:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.528 13:26:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.528 13:26:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:54.528 13:26:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:54.528 13:26:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd 
bdev_malloc_delete BaseBdev2 00:16:54.528 13:26:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.528 13:26:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.528 [2024-11-17 13:26:43.646595] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:54.528 [2024-11-17 13:26:43.646740] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:54.528 [2024-11-17 13:26:43.738485] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:54.528 [2024-11-17 13:26:43.738544] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:54.528 [2024-11-17 13:26:43.738556] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:54.528 13:26:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.528 13:26:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:54.528 13:26:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:54.528 13:26:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.528 13:26:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:54.528 13:26:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.528 13:26:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.788 13:26:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.788 13:26:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:54.788 13:26:43 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:54.788 13:26:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:16:54.788 13:26:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 85790 00:16:54.788 13:26:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 85790 ']' 00:16:54.788 13:26:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 85790 00:16:54.788 13:26:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:16:54.788 13:26:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:54.788 13:26:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85790 00:16:54.788 killing process with pid 85790 00:16:54.788 13:26:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:54.788 13:26:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:54.788 13:26:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85790' 00:16:54.788 13:26:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 85790 00:16:54.788 [2024-11-17 13:26:43.826758] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:54.788 13:26:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 85790 00:16:54.788 [2024-11-17 13:26:43.842414] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:55.729 ************************************ 00:16:55.729 END TEST raid_state_function_test_sb_4k 00:16:55.729 ************************************ 00:16:55.729 13:26:44 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@328 -- # return 0 00:16:55.729 00:16:55.729 real 0m4.931s 00:16:55.729 user 0m7.104s 00:16:55.729 sys 0m0.851s 00:16:55.729 13:26:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:55.729 13:26:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:55.989 13:26:44 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:16:55.989 13:26:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:55.989 13:26:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:55.989 13:26:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:55.989 ************************************ 00:16:55.989 START TEST raid_superblock_test_4k 00:16:55.989 ************************************ 00:16:55.989 13:26:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:16:55.989 13:26:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:16:55.989 13:26:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:16:55.989 13:26:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:55.989 13:26:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:55.989 13:26:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:55.989 13:26:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:55.989 13:26:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:55.989 13:26:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:55.989 13:26:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:55.989 
13:26:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:55.989 13:26:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:55.989 13:26:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:55.989 13:26:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:55.989 13:26:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:16:55.989 13:26:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:16:55.989 13:26:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86038 00:16:55.989 13:26:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:55.989 13:26:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 86038 00:16:55.989 13:26:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 86038 ']' 00:16:55.989 13:26:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:55.989 13:26:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:55.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:55.989 13:26:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:55.989 13:26:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:55.989 13:26:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:55.989 [2024-11-17 13:26:45.074803] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:16:55.990 [2024-11-17 13:26:45.074915] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86038 ] 00:16:56.249 [2024-11-17 13:26:45.246302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.249 [2024-11-17 13:26:45.355211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:56.509 [2024-11-17 13:26:45.549765] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:56.509 [2024-11-17 13:26:45.549888] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:56.769 13:26:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:56.769 13:26:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:16:56.769 13:26:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:56.769 13:26:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:56.769 13:26:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:56.769 13:26:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:56.769 13:26:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:56.769 13:26:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:56.769 13:26:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:56.769 13:26:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:56.769 13:26:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:16:56.769 13:26:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.769 13:26:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:56.769 malloc1 00:16:56.769 13:26:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.769 13:26:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:56.769 13:26:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.769 13:26:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:56.769 [2024-11-17 13:26:45.937006] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:56.769 [2024-11-17 13:26:45.937161] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:56.769 [2024-11-17 13:26:45.937202] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:56.769 [2024-11-17 13:26:45.937241] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:56.769 [2024-11-17 13:26:45.939382] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:56.769 [2024-11-17 13:26:45.939465] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:56.769 pt1 00:16:56.769 13:26:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.769 13:26:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:56.769 13:26:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:56.769 13:26:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:56.769 13:26:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:16:56.769 13:26:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:56.769 13:26:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:56.769 13:26:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:56.769 13:26:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:56.769 13:26:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:16:56.769 13:26:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.769 13:26:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:56.769 malloc2 00:16:56.770 13:26:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.770 13:26:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:56.770 13:26:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.770 13:26:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.030 [2024-11-17 13:26:45.994856] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:57.030 [2024-11-17 13:26:45.994914] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:57.030 [2024-11-17 13:26:45.994935] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:57.030 [2024-11-17 13:26:45.994943] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:57.030 [2024-11-17 13:26:45.996967] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:57.030 [2024-11-17 
13:26:45.997002] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:57.030 pt2 00:16:57.030 13:26:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.030 13:26:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:57.030 13:26:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:57.030 13:26:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:16:57.030 13:26:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.030 13:26:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.030 [2024-11-17 13:26:46.006882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:57.030 [2024-11-17 13:26:46.008650] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:57.030 [2024-11-17 13:26:46.008812] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:57.030 [2024-11-17 13:26:46.008828] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:57.030 [2024-11-17 13:26:46.009032] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:57.030 [2024-11-17 13:26:46.009167] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:57.030 [2024-11-17 13:26:46.009180] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:57.030 [2024-11-17 13:26:46.009332] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:57.030 13:26:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.030 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:57.030 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:57.030 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:57.030 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:57.030 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:57.030 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:57.030 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.030 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.030 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.030 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.030 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.030 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.030 13:26:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.030 13:26:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.030 13:26:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.030 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.030 "name": "raid_bdev1", 00:16:57.030 "uuid": "86e5e0d4-0936-4313-9f5e-35bf16f3846a", 00:16:57.030 "strip_size_kb": 0, 00:16:57.030 "state": "online", 00:16:57.030 "raid_level": "raid1", 00:16:57.030 "superblock": true, 00:16:57.030 "num_base_bdevs": 2, 00:16:57.030 
"num_base_bdevs_discovered": 2, 00:16:57.030 "num_base_bdevs_operational": 2, 00:16:57.030 "base_bdevs_list": [ 00:16:57.030 { 00:16:57.030 "name": "pt1", 00:16:57.030 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:57.030 "is_configured": true, 00:16:57.030 "data_offset": 256, 00:16:57.030 "data_size": 7936 00:16:57.030 }, 00:16:57.030 { 00:16:57.030 "name": "pt2", 00:16:57.030 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:57.030 "is_configured": true, 00:16:57.030 "data_offset": 256, 00:16:57.030 "data_size": 7936 00:16:57.030 } 00:16:57.030 ] 00:16:57.030 }' 00:16:57.030 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.030 13:26:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.290 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:57.290 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:57.290 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:57.290 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:57.290 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:16:57.290 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:57.290 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:57.290 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:57.290 13:26:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.290 13:26:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.290 [2024-11-17 13:26:46.446398] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:16:57.290 13:26:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.290 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:57.290 "name": "raid_bdev1", 00:16:57.290 "aliases": [ 00:16:57.290 "86e5e0d4-0936-4313-9f5e-35bf16f3846a" 00:16:57.290 ], 00:16:57.290 "product_name": "Raid Volume", 00:16:57.290 "block_size": 4096, 00:16:57.290 "num_blocks": 7936, 00:16:57.290 "uuid": "86e5e0d4-0936-4313-9f5e-35bf16f3846a", 00:16:57.290 "assigned_rate_limits": { 00:16:57.290 "rw_ios_per_sec": 0, 00:16:57.290 "rw_mbytes_per_sec": 0, 00:16:57.290 "r_mbytes_per_sec": 0, 00:16:57.290 "w_mbytes_per_sec": 0 00:16:57.290 }, 00:16:57.290 "claimed": false, 00:16:57.290 "zoned": false, 00:16:57.290 "supported_io_types": { 00:16:57.290 "read": true, 00:16:57.290 "write": true, 00:16:57.290 "unmap": false, 00:16:57.290 "flush": false, 00:16:57.290 "reset": true, 00:16:57.290 "nvme_admin": false, 00:16:57.290 "nvme_io": false, 00:16:57.290 "nvme_io_md": false, 00:16:57.290 "write_zeroes": true, 00:16:57.290 "zcopy": false, 00:16:57.290 "get_zone_info": false, 00:16:57.290 "zone_management": false, 00:16:57.290 "zone_append": false, 00:16:57.290 "compare": false, 00:16:57.290 "compare_and_write": false, 00:16:57.291 "abort": false, 00:16:57.291 "seek_hole": false, 00:16:57.291 "seek_data": false, 00:16:57.291 "copy": false, 00:16:57.291 "nvme_iov_md": false 00:16:57.291 }, 00:16:57.291 "memory_domains": [ 00:16:57.291 { 00:16:57.291 "dma_device_id": "system", 00:16:57.291 "dma_device_type": 1 00:16:57.291 }, 00:16:57.291 { 00:16:57.291 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:57.291 "dma_device_type": 2 00:16:57.291 }, 00:16:57.291 { 00:16:57.291 "dma_device_id": "system", 00:16:57.291 "dma_device_type": 1 00:16:57.291 }, 00:16:57.291 { 00:16:57.291 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:57.291 "dma_device_type": 2 00:16:57.291 } 00:16:57.291 ], 
00:16:57.291 "driver_specific": { 00:16:57.291 "raid": { 00:16:57.291 "uuid": "86e5e0d4-0936-4313-9f5e-35bf16f3846a", 00:16:57.291 "strip_size_kb": 0, 00:16:57.291 "state": "online", 00:16:57.291 "raid_level": "raid1", 00:16:57.291 "superblock": true, 00:16:57.291 "num_base_bdevs": 2, 00:16:57.291 "num_base_bdevs_discovered": 2, 00:16:57.291 "num_base_bdevs_operational": 2, 00:16:57.291 "base_bdevs_list": [ 00:16:57.291 { 00:16:57.291 "name": "pt1", 00:16:57.291 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:57.291 "is_configured": true, 00:16:57.291 "data_offset": 256, 00:16:57.291 "data_size": 7936 00:16:57.291 }, 00:16:57.291 { 00:16:57.291 "name": "pt2", 00:16:57.291 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:57.291 "is_configured": true, 00:16:57.291 "data_offset": 256, 00:16:57.291 "data_size": 7936 00:16:57.291 } 00:16:57.291 ] 00:16:57.291 } 00:16:57.291 } 00:16:57.291 }' 00:16:57.291 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:57.551 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:57.551 pt2' 00:16:57.551 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:57.551 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:16:57.551 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:57.551 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:57.551 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:57.551 13:26:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.551 13:26:46 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.551 13:26:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.551 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:57.551 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:57.551 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:57.551 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:57.551 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:57.551 13:26:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.551 13:26:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.551 13:26:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.551 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:57.551 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:57.551 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:57.551 13:26:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.551 13:26:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.551 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:57.551 [2024-11-17 13:26:46.661965] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:57.551 13:26:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:16:57.551 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=86e5e0d4-0936-4313-9f5e-35bf16f3846a 00:16:57.551 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 86e5e0d4-0936-4313-9f5e-35bf16f3846a ']' 00:16:57.551 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:57.551 13:26:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.551 13:26:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.551 [2024-11-17 13:26:46.709649] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:57.551 [2024-11-17 13:26:46.709726] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:57.551 [2024-11-17 13:26:46.709811] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:57.551 [2024-11-17 13:26:46.709878] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:57.551 [2024-11-17 13:26:46.709929] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:57.551 13:26:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.551 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.551 13:26:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.551 13:26:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.551 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:57.551 13:26:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.551 13:26:46 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:57.551 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:57.551 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:57.551 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:57.551 13:26:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.551 13:26:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.812 13:26:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.812 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:57.812 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:57.812 13:26:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.812 13:26:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.812 13:26:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.812 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:57.812 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:57.812 13:26:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.812 13:26:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.812 13:26:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.812 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:57.812 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:57.812 13:26:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:16:57.812 13:26:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:57.812 13:26:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:57.812 13:26:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:57.812 13:26:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:57.812 13:26:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:57.812 13:26:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:57.812 13:26:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.812 13:26:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.812 [2024-11-17 13:26:46.849455] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:57.812 [2024-11-17 13:26:46.851414] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:57.812 [2024-11-17 13:26:46.851522] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:57.812 [2024-11-17 13:26:46.851609] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:57.812 [2024-11-17 13:26:46.851649] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:57.812 [2024-11-17 13:26:46.851672] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:16:57.812 request: 00:16:57.812 { 00:16:57.812 "name": "raid_bdev1", 00:16:57.812 "raid_level": "raid1", 00:16:57.812 "base_bdevs": [ 00:16:57.812 "malloc1", 00:16:57.812 "malloc2" 00:16:57.812 ], 00:16:57.812 "superblock": false, 00:16:57.812 "method": "bdev_raid_create", 00:16:57.812 "req_id": 1 00:16:57.812 } 00:16:57.812 Got JSON-RPC error response 00:16:57.812 response: 00:16:57.812 { 00:16:57.812 "code": -17, 00:16:57.812 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:57.812 } 00:16:57.812 13:26:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:57.812 13:26:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:16:57.812 13:26:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:57.812 13:26:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:57.812 13:26:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:57.812 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.812 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:57.812 13:26:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.812 13:26:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.812 13:26:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.812 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:57.812 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:57.812 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:16:57.812 13:26:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.812 13:26:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.812 [2024-11-17 13:26:46.905320] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:57.812 [2024-11-17 13:26:46.905367] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:57.812 [2024-11-17 13:26:46.905382] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:57.812 [2024-11-17 13:26:46.905392] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:57.812 [2024-11-17 13:26:46.907388] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:57.812 [2024-11-17 13:26:46.907427] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:57.812 [2024-11-17 13:26:46.907486] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:57.813 [2024-11-17 13:26:46.907542] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:57.813 pt1 00:16:57.813 13:26:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.813 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:57.813 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:57.813 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:57.813 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:57.813 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:57.813 13:26:46 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:57.813 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.813 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.813 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.813 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.813 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.813 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.813 13:26:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.813 13:26:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.813 13:26:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.813 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.813 "name": "raid_bdev1", 00:16:57.813 "uuid": "86e5e0d4-0936-4313-9f5e-35bf16f3846a", 00:16:57.813 "strip_size_kb": 0, 00:16:57.813 "state": "configuring", 00:16:57.813 "raid_level": "raid1", 00:16:57.813 "superblock": true, 00:16:57.813 "num_base_bdevs": 2, 00:16:57.813 "num_base_bdevs_discovered": 1, 00:16:57.813 "num_base_bdevs_operational": 2, 00:16:57.813 "base_bdevs_list": [ 00:16:57.813 { 00:16:57.813 "name": "pt1", 00:16:57.813 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:57.813 "is_configured": true, 00:16:57.813 "data_offset": 256, 00:16:57.813 "data_size": 7936 00:16:57.813 }, 00:16:57.813 { 00:16:57.813 "name": null, 00:16:57.813 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:57.813 "is_configured": false, 00:16:57.813 "data_offset": 256, 00:16:57.813 "data_size": 7936 00:16:57.813 } 
00:16:57.813 ] 00:16:57.813 }' 00:16:57.813 13:26:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.813 13:26:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.383 13:26:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:16:58.383 13:26:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:58.383 13:26:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:58.383 13:26:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:58.383 13:26:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.383 13:26:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.383 [2024-11-17 13:26:47.364597] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:58.383 [2024-11-17 13:26:47.364744] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:58.383 [2024-11-17 13:26:47.364779] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:58.383 [2024-11-17 13:26:47.364808] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:58.383 [2024-11-17 13:26:47.365284] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:58.383 [2024-11-17 13:26:47.365349] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:58.383 [2024-11-17 13:26:47.365472] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:58.383 [2024-11-17 13:26:47.365533] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:58.383 [2024-11-17 13:26:47.365685] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:16:58.383 [2024-11-17 13:26:47.365722] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:58.383 [2024-11-17 13:26:47.365973] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:58.383 [2024-11-17 13:26:47.366164] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:58.383 [2024-11-17 13:26:47.366204] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:58.383 [2024-11-17 13:26:47.366418] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:58.383 pt2 00:16:58.383 13:26:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.383 13:26:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:58.383 13:26:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:58.383 13:26:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:58.383 13:26:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:58.383 13:26:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:58.383 13:26:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:58.383 13:26:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:58.383 13:26:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:58.383 13:26:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:58.383 13:26:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:58.383 13:26:47 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:58.383 13:26:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:58.383 13:26:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.383 13:26:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.383 13:26:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.383 13:26:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.383 13:26:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.383 13:26:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:58.383 "name": "raid_bdev1", 00:16:58.383 "uuid": "86e5e0d4-0936-4313-9f5e-35bf16f3846a", 00:16:58.383 "strip_size_kb": 0, 00:16:58.383 "state": "online", 00:16:58.383 "raid_level": "raid1", 00:16:58.383 "superblock": true, 00:16:58.383 "num_base_bdevs": 2, 00:16:58.383 "num_base_bdevs_discovered": 2, 00:16:58.383 "num_base_bdevs_operational": 2, 00:16:58.383 "base_bdevs_list": [ 00:16:58.383 { 00:16:58.383 "name": "pt1", 00:16:58.383 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:58.383 "is_configured": true, 00:16:58.383 "data_offset": 256, 00:16:58.383 "data_size": 7936 00:16:58.383 }, 00:16:58.383 { 00:16:58.383 "name": "pt2", 00:16:58.383 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:58.383 "is_configured": true, 00:16:58.383 "data_offset": 256, 00:16:58.383 "data_size": 7936 00:16:58.383 } 00:16:58.383 ] 00:16:58.383 }' 00:16:58.383 13:26:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:58.383 13:26:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.642 13:26:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:16:58.642 13:26:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:58.642 13:26:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:58.642 13:26:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:58.642 13:26:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:16:58.642 13:26:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:58.642 13:26:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:58.643 13:26:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:58.643 13:26:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.643 13:26:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.643 [2024-11-17 13:26:47.796073] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:58.643 13:26:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.643 13:26:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:58.643 "name": "raid_bdev1", 00:16:58.643 "aliases": [ 00:16:58.643 "86e5e0d4-0936-4313-9f5e-35bf16f3846a" 00:16:58.643 ], 00:16:58.643 "product_name": "Raid Volume", 00:16:58.643 "block_size": 4096, 00:16:58.643 "num_blocks": 7936, 00:16:58.643 "uuid": "86e5e0d4-0936-4313-9f5e-35bf16f3846a", 00:16:58.643 "assigned_rate_limits": { 00:16:58.643 "rw_ios_per_sec": 0, 00:16:58.643 "rw_mbytes_per_sec": 0, 00:16:58.643 "r_mbytes_per_sec": 0, 00:16:58.643 "w_mbytes_per_sec": 0 00:16:58.643 }, 00:16:58.643 "claimed": false, 00:16:58.643 "zoned": false, 00:16:58.643 "supported_io_types": { 00:16:58.643 "read": true, 00:16:58.643 "write": true, 00:16:58.643 "unmap": false, 
00:16:58.643 "flush": false, 00:16:58.643 "reset": true, 00:16:58.643 "nvme_admin": false, 00:16:58.643 "nvme_io": false, 00:16:58.643 "nvme_io_md": false, 00:16:58.643 "write_zeroes": true, 00:16:58.643 "zcopy": false, 00:16:58.643 "get_zone_info": false, 00:16:58.643 "zone_management": false, 00:16:58.643 "zone_append": false, 00:16:58.643 "compare": false, 00:16:58.643 "compare_and_write": false, 00:16:58.643 "abort": false, 00:16:58.643 "seek_hole": false, 00:16:58.643 "seek_data": false, 00:16:58.643 "copy": false, 00:16:58.643 "nvme_iov_md": false 00:16:58.643 }, 00:16:58.643 "memory_domains": [ 00:16:58.643 { 00:16:58.643 "dma_device_id": "system", 00:16:58.643 "dma_device_type": 1 00:16:58.643 }, 00:16:58.643 { 00:16:58.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:58.643 "dma_device_type": 2 00:16:58.643 }, 00:16:58.643 { 00:16:58.643 "dma_device_id": "system", 00:16:58.643 "dma_device_type": 1 00:16:58.643 }, 00:16:58.643 { 00:16:58.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:58.643 "dma_device_type": 2 00:16:58.643 } 00:16:58.643 ], 00:16:58.643 "driver_specific": { 00:16:58.643 "raid": { 00:16:58.643 "uuid": "86e5e0d4-0936-4313-9f5e-35bf16f3846a", 00:16:58.643 "strip_size_kb": 0, 00:16:58.643 "state": "online", 00:16:58.643 "raid_level": "raid1", 00:16:58.643 "superblock": true, 00:16:58.643 "num_base_bdevs": 2, 00:16:58.643 "num_base_bdevs_discovered": 2, 00:16:58.643 "num_base_bdevs_operational": 2, 00:16:58.643 "base_bdevs_list": [ 00:16:58.643 { 00:16:58.643 "name": "pt1", 00:16:58.643 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:58.643 "is_configured": true, 00:16:58.643 "data_offset": 256, 00:16:58.643 "data_size": 7936 00:16:58.643 }, 00:16:58.643 { 00:16:58.643 "name": "pt2", 00:16:58.643 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:58.643 "is_configured": true, 00:16:58.643 "data_offset": 256, 00:16:58.643 "data_size": 7936 00:16:58.643 } 00:16:58.643 ] 00:16:58.643 } 00:16:58.643 } 00:16:58.643 }' 00:16:58.643 
13:26:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:58.903 13:26:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:58.903 pt2' 00:16:58.903 13:26:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:58.903 13:26:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:16:58.903 13:26:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:58.903 13:26:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:58.903 13:26:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.903 13:26:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.903 13:26:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:58.903 13:26:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.903 13:26:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:58.903 13:26:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:58.903 13:26:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:58.903 13:26:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:58.903 13:26:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.903 13:26:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.903 13:26:47 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:58.903 13:26:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.903 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:58.903 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:58.903 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:58.903 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:58.903 13:26:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.903 13:26:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.903 [2024-11-17 13:26:48.047609] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:58.903 13:26:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.903 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 86e5e0d4-0936-4313-9f5e-35bf16f3846a '!=' 86e5e0d4-0936-4313-9f5e-35bf16f3846a ']' 00:16:58.903 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:16:58.903 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:58.903 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:16:58.903 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:58.903 13:26:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.903 13:26:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.903 [2024-11-17 13:26:48.095357] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 
00:16:58.903 13:26:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.903 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:58.903 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:58.903 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:58.903 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:58.903 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:58.903 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:58.903 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:58.903 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:58.903 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:58.903 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:58.903 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.903 13:26:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.903 13:26:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.903 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.903 13:26:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.163 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:59.163 "name": "raid_bdev1", 00:16:59.163 "uuid": 
"86e5e0d4-0936-4313-9f5e-35bf16f3846a", 00:16:59.163 "strip_size_kb": 0, 00:16:59.163 "state": "online", 00:16:59.163 "raid_level": "raid1", 00:16:59.163 "superblock": true, 00:16:59.163 "num_base_bdevs": 2, 00:16:59.163 "num_base_bdevs_discovered": 1, 00:16:59.163 "num_base_bdevs_operational": 1, 00:16:59.163 "base_bdevs_list": [ 00:16:59.163 { 00:16:59.163 "name": null, 00:16:59.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.163 "is_configured": false, 00:16:59.163 "data_offset": 0, 00:16:59.163 "data_size": 7936 00:16:59.163 }, 00:16:59.163 { 00:16:59.163 "name": "pt2", 00:16:59.163 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:59.163 "is_configured": true, 00:16:59.163 "data_offset": 256, 00:16:59.163 "data_size": 7936 00:16:59.163 } 00:16:59.163 ] 00:16:59.163 }' 00:16:59.164 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:59.164 13:26:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:59.424 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:59.424 13:26:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.424 13:26:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:59.424 [2024-11-17 13:26:48.570596] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:59.424 [2024-11-17 13:26:48.570678] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:59.424 [2024-11-17 13:26:48.570759] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:59.424 [2024-11-17 13:26:48.570818] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:59.424 [2024-11-17 13:26:48.570937] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state 
offline 00:16:59.424 13:26:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.424 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.424 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:59.424 13:26:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.424 13:26:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:59.424 13:26:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.424 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:59.424 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:59.424 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:59.424 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:59.424 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:59.424 13:26:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.424 13:26:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:59.424 13:26:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.424 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:59.424 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:59.424 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:59.424 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:59.424 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 
00:16:59.424 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:59.424 13:26:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.424 13:26:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:59.424 [2024-11-17 13:26:48.642431] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:59.424 [2024-11-17 13:26:48.642558] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:59.424 [2024-11-17 13:26:48.642591] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:59.424 [2024-11-17 13:26:48.642620] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:59.424 [2024-11-17 13:26:48.644772] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:59.424 [2024-11-17 13:26:48.644855] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:59.424 [2024-11-17 13:26:48.644955] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:59.424 [2024-11-17 13:26:48.645032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:59.424 [2024-11-17 13:26:48.645224] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:59.424 [2024-11-17 13:26:48.645266] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:59.424 [2024-11-17 13:26:48.645534] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:59.424 [2024-11-17 13:26:48.645725] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:59.424 [2024-11-17 13:26:48.645775] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000008200 00:16:59.424 [2024-11-17 13:26:48.646004] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:59.424 pt2 00:16:59.424 13:26:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.424 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:59.424 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:59.684 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:59.684 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:59.684 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:59.684 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:59.684 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:59.684 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:59.684 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:59.684 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:59.684 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.684 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.684 13:26:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.684 13:26:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:59.684 13:26:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.684 13:26:48 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:59.684 "name": "raid_bdev1", 00:16:59.684 "uuid": "86e5e0d4-0936-4313-9f5e-35bf16f3846a", 00:16:59.684 "strip_size_kb": 0, 00:16:59.684 "state": "online", 00:16:59.684 "raid_level": "raid1", 00:16:59.684 "superblock": true, 00:16:59.684 "num_base_bdevs": 2, 00:16:59.684 "num_base_bdevs_discovered": 1, 00:16:59.684 "num_base_bdevs_operational": 1, 00:16:59.684 "base_bdevs_list": [ 00:16:59.684 { 00:16:59.684 "name": null, 00:16:59.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.684 "is_configured": false, 00:16:59.684 "data_offset": 256, 00:16:59.684 "data_size": 7936 00:16:59.684 }, 00:16:59.684 { 00:16:59.684 "name": "pt2", 00:16:59.684 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:59.684 "is_configured": true, 00:16:59.684 "data_offset": 256, 00:16:59.684 "data_size": 7936 00:16:59.684 } 00:16:59.684 ] 00:16:59.684 }' 00:16:59.684 13:26:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:59.684 13:26:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:59.944 13:26:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:59.944 13:26:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.944 13:26:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:59.944 [2024-11-17 13:26:49.085659] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:59.944 [2024-11-17 13:26:49.085689] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:59.944 [2024-11-17 13:26:49.085745] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:59.944 [2024-11-17 13:26:49.085786] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:16:59.944 [2024-11-17 13:26:49.085795] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:59.944 13:26:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.944 13:26:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.944 13:26:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.944 13:26:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:59.944 13:26:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:59.944 13:26:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.944 13:26:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:59.945 13:26:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:16:59.945 13:26:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:16:59.945 13:26:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:59.945 13:26:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.945 13:26:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:59.945 [2024-11-17 13:26:49.149564] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:59.945 [2024-11-17 13:26:49.149654] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:59.945 [2024-11-17 13:26:49.149685] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:16:59.945 [2024-11-17 13:26:49.149710] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:59.945 [2024-11-17 13:26:49.151841] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:59.945 [2024-11-17 13:26:49.151907] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:59.945 [2024-11-17 13:26:49.151996] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:59.945 [2024-11-17 13:26:49.152077] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:59.945 [2024-11-17 13:26:49.152263] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:59.945 [2024-11-17 13:26:49.152305] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:59.945 [2024-11-17 13:26:49.152323] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:16:59.945 [2024-11-17 13:26:49.152396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:59.945 [2024-11-17 13:26:49.152477] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:16:59.945 [2024-11-17 13:26:49.152485] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:59.945 [2024-11-17 13:26:49.152712] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:59.945 [2024-11-17 13:26:49.152840] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:16:59.945 [2024-11-17 13:26:49.152851] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:16:59.945 [2024-11-17 13:26:49.152987] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:59.945 pt1 00:16:59.945 13:26:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.945 13:26:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- 
# '[' 2 -gt 2 ']' 00:16:59.945 13:26:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:59.945 13:26:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:59.945 13:26:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:59.945 13:26:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:59.945 13:26:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:59.945 13:26:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:59.945 13:26:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:59.945 13:26:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:59.945 13:26:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:59.945 13:26:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:59.945 13:26:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.945 13:26:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.945 13:26:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:59.945 13:26:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.204 13:26:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.204 13:26:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:00.204 "name": "raid_bdev1", 00:17:00.204 "uuid": "86e5e0d4-0936-4313-9f5e-35bf16f3846a", 00:17:00.204 "strip_size_kb": 0, 00:17:00.204 "state": "online", 00:17:00.204 
"raid_level": "raid1", 00:17:00.204 "superblock": true, 00:17:00.204 "num_base_bdevs": 2, 00:17:00.204 "num_base_bdevs_discovered": 1, 00:17:00.204 "num_base_bdevs_operational": 1, 00:17:00.204 "base_bdevs_list": [ 00:17:00.204 { 00:17:00.204 "name": null, 00:17:00.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.204 "is_configured": false, 00:17:00.204 "data_offset": 256, 00:17:00.204 "data_size": 7936 00:17:00.204 }, 00:17:00.204 { 00:17:00.204 "name": "pt2", 00:17:00.204 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:00.204 "is_configured": true, 00:17:00.204 "data_offset": 256, 00:17:00.204 "data_size": 7936 00:17:00.204 } 00:17:00.204 ] 00:17:00.204 }' 00:17:00.204 13:26:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:00.204 13:26:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:00.464 13:26:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:00.464 13:26:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:00.464 13:26:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.464 13:26:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:00.464 13:26:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.464 13:26:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:00.464 13:26:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:00.464 13:26:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.464 13:26:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:00.464 13:26:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | 
.uuid' 00:17:00.464 [2024-11-17 13:26:49.620938] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:00.464 13:26:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.464 13:26:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 86e5e0d4-0936-4313-9f5e-35bf16f3846a '!=' 86e5e0d4-0936-4313-9f5e-35bf16f3846a ']' 00:17:00.464 13:26:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86038 00:17:00.464 13:26:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 86038 ']' 00:17:00.464 13:26:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 86038 00:17:00.464 13:26:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:17:00.464 13:26:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:00.464 13:26:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86038 00:17:00.724 killing process with pid 86038 00:17:00.724 13:26:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:00.724 13:26:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:00.724 13:26:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86038' 00:17:00.724 13:26:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 86038 00:17:00.724 [2024-11-17 13:26:49.710631] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:00.724 [2024-11-17 13:26:49.710709] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:00.724 [2024-11-17 13:26:49.710751] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:00.724 [2024-11-17 
13:26:49.710765] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:00.724 13:26:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 86038 00:17:00.724 [2024-11-17 13:26:49.911253] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:02.107 13:26:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:17:02.107 00:17:02.107 real 0m5.978s 00:17:02.107 user 0m9.029s 00:17:02.107 sys 0m1.110s 00:17:02.107 13:26:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:02.107 ************************************ 00:17:02.107 END TEST raid_superblock_test_4k 00:17:02.107 ************************************ 00:17:02.107 13:26:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:02.107 13:26:51 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:17:02.107 13:26:51 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:17:02.107 13:26:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:02.107 13:26:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:02.107 13:26:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:02.107 ************************************ 00:17:02.107 START TEST raid_rebuild_test_sb_4k 00:17:02.107 ************************************ 00:17:02.107 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:17:02.107 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:02.107 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:02.107 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:02.107 13:26:51 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:02.107 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:02.107 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:02.107 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:02.107 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:02.107 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:02.107 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:02.107 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:02.107 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:02.107 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:02.107 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:02.107 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:02.107 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:02.107 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:02.107 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:02.107 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:02.107 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:02.107 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:02.107 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 
00:17:02.107 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:02.107 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:02.107 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86365 00:17:02.107 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:02.107 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86365 00:17:02.107 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86365 ']' 00:17:02.107 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:02.107 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:02.107 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:02.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:02.107 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:02.107 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:02.107 [2024-11-17 13:26:51.139265] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:17:02.107 [2024-11-17 13:26:51.139483] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86365 ] 00:17:02.107 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:17:02.107 Zero copy mechanism will not be used. 00:17:02.107 [2024-11-17 13:26:51.305940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:02.368 [2024-11-17 13:26:51.417080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:02.627 [2024-11-17 13:26:51.613906] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:02.627 [2024-11-17 13:26:51.614043] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:02.886 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:02.886 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:17:02.886 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:02.886 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:17:02.886 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.886 13:26:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:02.886 BaseBdev1_malloc 00:17:02.886 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.886 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:02.886 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.886 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:02.886 [2024-11-17 13:26:52.017739] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:02.886 [2024-11-17 13:26:52.017882] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:02.886 [2024-11-17 13:26:52.017923] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000007280 00:17:02.886 [2024-11-17 13:26:52.017951] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:02.886 [2024-11-17 13:26:52.019976] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:02.886 [2024-11-17 13:26:52.020046] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:02.886 BaseBdev1 00:17:02.886 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.886 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:02.886 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:17:02.886 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.886 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:02.886 BaseBdev2_malloc 00:17:02.886 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.886 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:02.886 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.886 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:02.886 [2024-11-17 13:26:52.071682] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:02.886 [2024-11-17 13:26:52.071794] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:02.886 [2024-11-17 13:26:52.071828] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:02.886 [2024-11-17 13:26:52.071856] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:17:02.886 [2024-11-17 13:26:52.073717] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:02.886 [2024-11-17 13:26:52.073787] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:02.886 BaseBdev2 00:17:02.886 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.886 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:17:02.886 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.886 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.145 spare_malloc 00:17:03.145 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.145 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:03.145 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.145 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.145 spare_delay 00:17:03.145 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.145 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:03.145 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.145 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.145 [2024-11-17 13:26:52.148867] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:03.145 [2024-11-17 13:26:52.148920] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:03.145 [2024-11-17 13:26:52.148938] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:03.145 [2024-11-17 13:26:52.148948] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:03.145 [2024-11-17 13:26:52.150880] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:03.145 [2024-11-17 13:26:52.150968] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:03.145 spare 00:17:03.145 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.145 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:03.145 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.145 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.145 [2024-11-17 13:26:52.160904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:03.145 [2024-11-17 13:26:52.162570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:03.145 [2024-11-17 13:26:52.162737] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:03.145 [2024-11-17 13:26:52.162752] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:03.145 [2024-11-17 13:26:52.162971] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:03.145 [2024-11-17 13:26:52.163119] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:03.145 [2024-11-17 13:26:52.163127] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:03.145 [2024-11-17 13:26:52.163266] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:03.145 
13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.145 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:03.145 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:03.145 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:03.145 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:03.145 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:03.145 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:03.145 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:03.145 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:03.145 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:03.146 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:03.146 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.146 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.146 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.146 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.146 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.146 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:03.146 "name": "raid_bdev1", 00:17:03.146 "uuid": "1e208572-47fa-4f0d-8ee3-b8ea70b0e44d", 
00:17:03.146 "strip_size_kb": 0, 00:17:03.146 "state": "online", 00:17:03.146 "raid_level": "raid1", 00:17:03.146 "superblock": true, 00:17:03.146 "num_base_bdevs": 2, 00:17:03.146 "num_base_bdevs_discovered": 2, 00:17:03.146 "num_base_bdevs_operational": 2, 00:17:03.146 "base_bdevs_list": [ 00:17:03.146 { 00:17:03.146 "name": "BaseBdev1", 00:17:03.146 "uuid": "0aa4f079-8207-58df-8c3a-751edb49cfd3", 00:17:03.146 "is_configured": true, 00:17:03.146 "data_offset": 256, 00:17:03.146 "data_size": 7936 00:17:03.146 }, 00:17:03.146 { 00:17:03.146 "name": "BaseBdev2", 00:17:03.146 "uuid": "c73f92f8-e62c-5f5a-94ce-a01c06c737e3", 00:17:03.146 "is_configured": true, 00:17:03.146 "data_offset": 256, 00:17:03.146 "data_size": 7936 00:17:03.146 } 00:17:03.146 ] 00:17:03.146 }' 00:17:03.146 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:03.146 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.405 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:03.405 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.405 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.405 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:03.405 [2024-11-17 13:26:52.568421] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:03.405 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.405 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:17:03.405 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.405 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 
00:17:03.405 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.405 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.405 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.666 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:17:03.666 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:03.666 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:03.666 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:03.666 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:03.666 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:03.666 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:03.666 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:03.666 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:03.666 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:03.666 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:17:03.666 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:03.666 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:03.666 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:03.666 [2024-11-17 13:26:52.839765] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005fb0 00:17:03.666 /dev/nbd0 00:17:03.666 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:03.666 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:03.666 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:03.666 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:17:03.666 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:03.666 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:03.666 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:03.666 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:17:03.666 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:03.666 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:03.666 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:03.926 1+0 records in 00:17:03.926 1+0 records out 00:17:03.926 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000393522 s, 10.4 MB/s 00:17:03.926 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:03.926 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:17:03.926 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:03.926 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:03.926 13:26:52 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:17:03.926 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:03.926 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:03.926 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:03.926 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:03.926 13:26:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:17:04.497 7936+0 records in 00:17:04.497 7936+0 records out 00:17:04.497 32505856 bytes (33 MB, 31 MiB) copied, 0.64307 s, 50.5 MB/s 00:17:04.497 13:26:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:04.497 13:26:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:04.497 13:26:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:04.497 13:26:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:04.497 13:26:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:17:04.497 13:26:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:04.497 13:26:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:04.757 13:26:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:04.757 [2024-11-17 13:26:53.775466] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:04.757 13:26:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:04.757 13:26:53 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:04.757 13:26:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:04.757 13:26:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:04.757 13:26:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:04.757 13:26:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:04.757 13:26:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:04.757 13:26:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:04.757 13:26:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.757 13:26:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.757 [2024-11-17 13:26:53.790240] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:04.757 13:26:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.757 13:26:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:04.757 13:26:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:04.757 13:26:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:04.757 13:26:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:04.757 13:26:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:04.757 13:26:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:04.757 13:26:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:04.757 13:26:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- 
# local num_base_bdevs 00:17:04.757 13:26:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:04.757 13:26:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:04.757 13:26:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.757 13:26:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.757 13:26:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.757 13:26:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.757 13:26:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.757 13:26:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:04.757 "name": "raid_bdev1", 00:17:04.757 "uuid": "1e208572-47fa-4f0d-8ee3-b8ea70b0e44d", 00:17:04.757 "strip_size_kb": 0, 00:17:04.757 "state": "online", 00:17:04.757 "raid_level": "raid1", 00:17:04.757 "superblock": true, 00:17:04.757 "num_base_bdevs": 2, 00:17:04.757 "num_base_bdevs_discovered": 1, 00:17:04.757 "num_base_bdevs_operational": 1, 00:17:04.757 "base_bdevs_list": [ 00:17:04.757 { 00:17:04.757 "name": null, 00:17:04.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:04.757 "is_configured": false, 00:17:04.757 "data_offset": 0, 00:17:04.757 "data_size": 7936 00:17:04.757 }, 00:17:04.757 { 00:17:04.757 "name": "BaseBdev2", 00:17:04.757 "uuid": "c73f92f8-e62c-5f5a-94ce-a01c06c737e3", 00:17:04.757 "is_configured": true, 00:17:04.757 "data_offset": 256, 00:17:04.757 "data_size": 7936 00:17:04.757 } 00:17:04.757 ] 00:17:04.757 }' 00:17:04.757 13:26:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:04.757 13:26:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:05.328 13:26:54 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:05.328 13:26:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.328 13:26:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:05.328 [2024-11-17 13:26:54.257439] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:05.328 [2024-11-17 13:26:54.274152] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:17:05.328 13:26:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.328 13:26:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:05.328 [2024-11-17 13:26:54.276032] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:06.267 13:26:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:06.267 13:26:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:06.267 13:26:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:06.267 13:26:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:06.267 13:26:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:06.267 13:26:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.267 13:26:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.267 13:26:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.267 13:26:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:06.267 13:26:55 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.267 13:26:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:06.267 "name": "raid_bdev1", 00:17:06.267 "uuid": "1e208572-47fa-4f0d-8ee3-b8ea70b0e44d", 00:17:06.267 "strip_size_kb": 0, 00:17:06.267 "state": "online", 00:17:06.267 "raid_level": "raid1", 00:17:06.267 "superblock": true, 00:17:06.267 "num_base_bdevs": 2, 00:17:06.267 "num_base_bdevs_discovered": 2, 00:17:06.267 "num_base_bdevs_operational": 2, 00:17:06.267 "process": { 00:17:06.267 "type": "rebuild", 00:17:06.267 "target": "spare", 00:17:06.267 "progress": { 00:17:06.267 "blocks": 2560, 00:17:06.267 "percent": 32 00:17:06.267 } 00:17:06.267 }, 00:17:06.267 "base_bdevs_list": [ 00:17:06.267 { 00:17:06.267 "name": "spare", 00:17:06.267 "uuid": "9d769ae4-946c-5e94-8310-1aa77cf5c6ae", 00:17:06.267 "is_configured": true, 00:17:06.267 "data_offset": 256, 00:17:06.267 "data_size": 7936 00:17:06.267 }, 00:17:06.267 { 00:17:06.267 "name": "BaseBdev2", 00:17:06.267 "uuid": "c73f92f8-e62c-5f5a-94ce-a01c06c737e3", 00:17:06.267 "is_configured": true, 00:17:06.267 "data_offset": 256, 00:17:06.267 "data_size": 7936 00:17:06.267 } 00:17:06.267 ] 00:17:06.267 }' 00:17:06.267 13:26:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:06.267 13:26:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:06.267 13:26:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:06.267 13:26:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:06.267 13:26:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:06.267 13:26:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.267 13:26:55 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:06.267 [2024-11-17 13:26:55.443336] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:06.267 [2024-11-17 13:26:55.480755] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:06.267 [2024-11-17 13:26:55.480813] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:06.267 [2024-11-17 13:26:55.480826] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:06.267 [2024-11-17 13:26:55.480835] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:06.527 13:26:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.527 13:26:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:06.527 13:26:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:06.527 13:26:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:06.527 13:26:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:06.527 13:26:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:06.527 13:26:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:06.527 13:26:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.527 13:26:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.527 13:26:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.527 13:26:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.527 13:26:55 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.527 13:26:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.527 13:26:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.527 13:26:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:06.527 13:26:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.527 13:26:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.527 "name": "raid_bdev1", 00:17:06.527 "uuid": "1e208572-47fa-4f0d-8ee3-b8ea70b0e44d", 00:17:06.527 "strip_size_kb": 0, 00:17:06.527 "state": "online", 00:17:06.528 "raid_level": "raid1", 00:17:06.528 "superblock": true, 00:17:06.528 "num_base_bdevs": 2, 00:17:06.528 "num_base_bdevs_discovered": 1, 00:17:06.528 "num_base_bdevs_operational": 1, 00:17:06.528 "base_bdevs_list": [ 00:17:06.528 { 00:17:06.528 "name": null, 00:17:06.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.528 "is_configured": false, 00:17:06.528 "data_offset": 0, 00:17:06.528 "data_size": 7936 00:17:06.528 }, 00:17:06.528 { 00:17:06.528 "name": "BaseBdev2", 00:17:06.528 "uuid": "c73f92f8-e62c-5f5a-94ce-a01c06c737e3", 00:17:06.528 "is_configured": true, 00:17:06.528 "data_offset": 256, 00:17:06.528 "data_size": 7936 00:17:06.528 } 00:17:06.528 ] 00:17:06.528 }' 00:17:06.528 13:26:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:06.528 13:26:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:06.795 13:26:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:06.795 13:26:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:06.795 13:26:55 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:06.795 13:26:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:06.795 13:26:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:06.795 13:26:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.795 13:26:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.795 13:26:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.795 13:26:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:06.795 13:26:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.055 13:26:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:07.055 "name": "raid_bdev1", 00:17:07.055 "uuid": "1e208572-47fa-4f0d-8ee3-b8ea70b0e44d", 00:17:07.055 "strip_size_kb": 0, 00:17:07.055 "state": "online", 00:17:07.055 "raid_level": "raid1", 00:17:07.055 "superblock": true, 00:17:07.055 "num_base_bdevs": 2, 00:17:07.055 "num_base_bdevs_discovered": 1, 00:17:07.055 "num_base_bdevs_operational": 1, 00:17:07.055 "base_bdevs_list": [ 00:17:07.055 { 00:17:07.055 "name": null, 00:17:07.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.055 "is_configured": false, 00:17:07.055 "data_offset": 0, 00:17:07.055 "data_size": 7936 00:17:07.055 }, 00:17:07.055 { 00:17:07.055 "name": "BaseBdev2", 00:17:07.055 "uuid": "c73f92f8-e62c-5f5a-94ce-a01c06c737e3", 00:17:07.055 "is_configured": true, 00:17:07.055 "data_offset": 256, 00:17:07.055 "data_size": 7936 00:17:07.055 } 00:17:07.055 ] 00:17:07.055 }' 00:17:07.055 13:26:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:07.055 13:26:56 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:07.055 13:26:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:07.055 13:26:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:07.056 13:26:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:07.056 13:26:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.056 13:26:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:07.056 [2024-11-17 13:26:56.126029] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:07.056 [2024-11-17 13:26:56.143266] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:17:07.056 13:26:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.056 13:26:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:07.056 [2024-11-17 13:26:56.145178] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:07.996 13:26:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:07.996 13:26:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:07.996 13:26:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:07.996 13:26:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:07.996 13:26:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:07.996 13:26:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.996 13:26:57 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.996 13:26:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.996 13:26:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:07.996 13:26:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.996 13:26:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:07.996 "name": "raid_bdev1", 00:17:07.996 "uuid": "1e208572-47fa-4f0d-8ee3-b8ea70b0e44d", 00:17:07.996 "strip_size_kb": 0, 00:17:07.996 "state": "online", 00:17:07.996 "raid_level": "raid1", 00:17:07.996 "superblock": true, 00:17:07.996 "num_base_bdevs": 2, 00:17:07.996 "num_base_bdevs_discovered": 2, 00:17:07.996 "num_base_bdevs_operational": 2, 00:17:07.996 "process": { 00:17:07.996 "type": "rebuild", 00:17:07.996 "target": "spare", 00:17:07.996 "progress": { 00:17:07.996 "blocks": 2560, 00:17:07.996 "percent": 32 00:17:07.996 } 00:17:07.996 }, 00:17:07.996 "base_bdevs_list": [ 00:17:07.996 { 00:17:07.996 "name": "spare", 00:17:07.996 "uuid": "9d769ae4-946c-5e94-8310-1aa77cf5c6ae", 00:17:07.996 "is_configured": true, 00:17:07.996 "data_offset": 256, 00:17:07.996 "data_size": 7936 00:17:07.996 }, 00:17:07.996 { 00:17:07.996 "name": "BaseBdev2", 00:17:07.996 "uuid": "c73f92f8-e62c-5f5a-94ce-a01c06c737e3", 00:17:07.996 "is_configured": true, 00:17:07.996 "data_offset": 256, 00:17:07.996 "data_size": 7936 00:17:07.996 } 00:17:07.996 ] 00:17:07.996 }' 00:17:07.996 13:26:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:08.267 13:26:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:08.267 13:26:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:08.267 13:26:57 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:08.267 13:26:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:08.267 13:26:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:08.267 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:08.267 13:26:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:08.267 13:26:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:08.267 13:26:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:08.267 13:26:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=667 00:17:08.267 13:26:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:08.267 13:26:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:08.267 13:26:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:08.267 13:26:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:08.267 13:26:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:08.267 13:26:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:08.267 13:26:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.267 13:26:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.267 13:26:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.267 13:26:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:08.267 13:26:57 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.267 13:26:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:08.267 "name": "raid_bdev1", 00:17:08.267 "uuid": "1e208572-47fa-4f0d-8ee3-b8ea70b0e44d", 00:17:08.267 "strip_size_kb": 0, 00:17:08.267 "state": "online", 00:17:08.267 "raid_level": "raid1", 00:17:08.267 "superblock": true, 00:17:08.267 "num_base_bdevs": 2, 00:17:08.267 "num_base_bdevs_discovered": 2, 00:17:08.267 "num_base_bdevs_operational": 2, 00:17:08.267 "process": { 00:17:08.267 "type": "rebuild", 00:17:08.267 "target": "spare", 00:17:08.267 "progress": { 00:17:08.267 "blocks": 2816, 00:17:08.267 "percent": 35 00:17:08.267 } 00:17:08.267 }, 00:17:08.267 "base_bdevs_list": [ 00:17:08.267 { 00:17:08.267 "name": "spare", 00:17:08.267 "uuid": "9d769ae4-946c-5e94-8310-1aa77cf5c6ae", 00:17:08.267 "is_configured": true, 00:17:08.267 "data_offset": 256, 00:17:08.267 "data_size": 7936 00:17:08.267 }, 00:17:08.267 { 00:17:08.267 "name": "BaseBdev2", 00:17:08.267 "uuid": "c73f92f8-e62c-5f5a-94ce-a01c06c737e3", 00:17:08.267 "is_configured": true, 00:17:08.267 "data_offset": 256, 00:17:08.267 "data_size": 7936 00:17:08.267 } 00:17:08.267 ] 00:17:08.267 }' 00:17:08.267 13:26:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:08.267 13:26:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:08.267 13:26:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:08.267 13:26:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:08.267 13:26:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:09.220 13:26:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:09.220 13:26:58 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:09.220 13:26:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:09.220 13:26:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:09.220 13:26:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:09.220 13:26:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:09.220 13:26:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.220 13:26:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.220 13:26:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.220 13:26:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:09.220 13:26:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.481 13:26:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:09.481 "name": "raid_bdev1", 00:17:09.481 "uuid": "1e208572-47fa-4f0d-8ee3-b8ea70b0e44d", 00:17:09.481 "strip_size_kb": 0, 00:17:09.481 "state": "online", 00:17:09.481 "raid_level": "raid1", 00:17:09.481 "superblock": true, 00:17:09.481 "num_base_bdevs": 2, 00:17:09.481 "num_base_bdevs_discovered": 2, 00:17:09.481 "num_base_bdevs_operational": 2, 00:17:09.481 "process": { 00:17:09.481 "type": "rebuild", 00:17:09.481 "target": "spare", 00:17:09.481 "progress": { 00:17:09.481 "blocks": 5632, 00:17:09.481 "percent": 70 00:17:09.481 } 00:17:09.481 }, 00:17:09.481 "base_bdevs_list": [ 00:17:09.481 { 00:17:09.481 "name": "spare", 00:17:09.481 "uuid": "9d769ae4-946c-5e94-8310-1aa77cf5c6ae", 00:17:09.481 "is_configured": true, 00:17:09.481 "data_offset": 256, 00:17:09.481 "data_size": 7936 00:17:09.481 
}, 00:17:09.481 { 00:17:09.481 "name": "BaseBdev2", 00:17:09.481 "uuid": "c73f92f8-e62c-5f5a-94ce-a01c06c737e3", 00:17:09.481 "is_configured": true, 00:17:09.481 "data_offset": 256, 00:17:09.481 "data_size": 7936 00:17:09.481 } 00:17:09.481 ] 00:17:09.481 }' 00:17:09.481 13:26:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:09.481 13:26:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:09.481 13:26:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:09.481 13:26:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:09.481 13:26:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:10.051 [2024-11-17 13:26:59.257048] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:10.051 [2024-11-17 13:26:59.257112] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:10.051 [2024-11-17 13:26:59.257200] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:10.619 13:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:10.619 13:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:10.619 13:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:10.619 13:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:10.619 13:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:10.619 13:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:10.619 13:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:17:10.619 13:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.619 13:26:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.619 13:26:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:10.619 13:26:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.619 13:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:10.619 "name": "raid_bdev1", 00:17:10.619 "uuid": "1e208572-47fa-4f0d-8ee3-b8ea70b0e44d", 00:17:10.619 "strip_size_kb": 0, 00:17:10.619 "state": "online", 00:17:10.619 "raid_level": "raid1", 00:17:10.619 "superblock": true, 00:17:10.619 "num_base_bdevs": 2, 00:17:10.619 "num_base_bdevs_discovered": 2, 00:17:10.619 "num_base_bdevs_operational": 2, 00:17:10.619 "base_bdevs_list": [ 00:17:10.619 { 00:17:10.619 "name": "spare", 00:17:10.619 "uuid": "9d769ae4-946c-5e94-8310-1aa77cf5c6ae", 00:17:10.619 "is_configured": true, 00:17:10.619 "data_offset": 256, 00:17:10.619 "data_size": 7936 00:17:10.619 }, 00:17:10.619 { 00:17:10.619 "name": "BaseBdev2", 00:17:10.619 "uuid": "c73f92f8-e62c-5f5a-94ce-a01c06c737e3", 00:17:10.619 "is_configured": true, 00:17:10.619 "data_offset": 256, 00:17:10.619 "data_size": 7936 00:17:10.619 } 00:17:10.619 ] 00:17:10.619 }' 00:17:10.619 13:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:10.619 13:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:10.619 13:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:10.619 13:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:10.619 13:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 
00:17:10.619 13:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:10.619 13:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:10.619 13:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:10.619 13:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:10.620 13:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:10.620 13:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.620 13:26:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.620 13:26:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:10.620 13:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.620 13:26:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.620 13:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:10.620 "name": "raid_bdev1", 00:17:10.620 "uuid": "1e208572-47fa-4f0d-8ee3-b8ea70b0e44d", 00:17:10.620 "strip_size_kb": 0, 00:17:10.620 "state": "online", 00:17:10.620 "raid_level": "raid1", 00:17:10.620 "superblock": true, 00:17:10.620 "num_base_bdevs": 2, 00:17:10.620 "num_base_bdevs_discovered": 2, 00:17:10.620 "num_base_bdevs_operational": 2, 00:17:10.620 "base_bdevs_list": [ 00:17:10.620 { 00:17:10.620 "name": "spare", 00:17:10.620 "uuid": "9d769ae4-946c-5e94-8310-1aa77cf5c6ae", 00:17:10.620 "is_configured": true, 00:17:10.620 "data_offset": 256, 00:17:10.620 "data_size": 7936 00:17:10.620 }, 00:17:10.620 { 00:17:10.620 "name": "BaseBdev2", 00:17:10.620 "uuid": "c73f92f8-e62c-5f5a-94ce-a01c06c737e3", 00:17:10.620 "is_configured": true, 
00:17:10.620 "data_offset": 256, 00:17:10.620 "data_size": 7936 00:17:10.620 } 00:17:10.620 ] 00:17:10.620 }' 00:17:10.620 13:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:10.620 13:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:10.620 13:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:10.620 13:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:10.620 13:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:10.620 13:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:10.620 13:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:10.620 13:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:10.620 13:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:10.620 13:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:10.620 13:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:10.620 13:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:10.620 13:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:10.620 13:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:10.620 13:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.620 13:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.620 13:26:59 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.620 13:26:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:10.620 13:26:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.879 13:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:10.879 "name": "raid_bdev1", 00:17:10.879 "uuid": "1e208572-47fa-4f0d-8ee3-b8ea70b0e44d", 00:17:10.879 "strip_size_kb": 0, 00:17:10.879 "state": "online", 00:17:10.879 "raid_level": "raid1", 00:17:10.879 "superblock": true, 00:17:10.879 "num_base_bdevs": 2, 00:17:10.879 "num_base_bdevs_discovered": 2, 00:17:10.879 "num_base_bdevs_operational": 2, 00:17:10.879 "base_bdevs_list": [ 00:17:10.880 { 00:17:10.880 "name": "spare", 00:17:10.880 "uuid": "9d769ae4-946c-5e94-8310-1aa77cf5c6ae", 00:17:10.880 "is_configured": true, 00:17:10.880 "data_offset": 256, 00:17:10.880 "data_size": 7936 00:17:10.880 }, 00:17:10.880 { 00:17:10.880 "name": "BaseBdev2", 00:17:10.880 "uuid": "c73f92f8-e62c-5f5a-94ce-a01c06c737e3", 00:17:10.880 "is_configured": true, 00:17:10.880 "data_offset": 256, 00:17:10.880 "data_size": 7936 00:17:10.880 } 00:17:10.880 ] 00:17:10.880 }' 00:17:10.880 13:26:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:10.880 13:26:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:11.140 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:11.140 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.140 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:11.140 [2024-11-17 13:27:00.252965] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:11.140 [2024-11-17 13:27:00.252998] bdev_raid.c:1899:raid_bdev_deconfigure: 
*DEBUG*: raid bdev state changing from online to offline 00:17:11.140 [2024-11-17 13:27:00.253077] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:11.140 [2024-11-17 13:27:00.253140] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:11.140 [2024-11-17 13:27:00.253149] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:11.140 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.140 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.140 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.140 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:17:11.140 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:11.140 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.140 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:11.140 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:11.140 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:11.140 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:11.140 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:11.140 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:11.140 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:11.140 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:11.140 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:11.140 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:17:11.140 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:11.140 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:11.140 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:11.400 /dev/nbd0 00:17:11.400 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:11.400 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:11.400 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:11.400 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:17:11.400 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:11.400 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:11.400 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:11.400 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:17:11.400 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:11.400 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:11.400 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:11.400 1+0 records in 00:17:11.400 1+0 records out 00:17:11.400 
4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000333031 s, 12.3 MB/s 00:17:11.401 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:11.401 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:17:11.401 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:11.401 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:11.401 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:17:11.401 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:11.401 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:11.401 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:11.661 /dev/nbd1 00:17:11.661 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:11.661 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:11.661 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:11.661 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:17:11.661 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:11.661 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:11.661 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:11.661 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:17:11.661 13:27:00 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:11.661 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:11.661 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:11.661 1+0 records in 00:17:11.661 1+0 records out 00:17:11.661 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000321725 s, 12.7 MB/s 00:17:11.661 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:11.661 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:17:11.661 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:11.661 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:11.661 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:17:11.661 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:11.661 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:11.661 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:11.921 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:11.921 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:11.921 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:11.921 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:11.921 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@51 -- # local i 00:17:11.921 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:11.921 13:27:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:12.181 13:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:12.181 13:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:12.181 13:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:12.181 13:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:12.181 13:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:12.181 13:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:12.181 13:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:12.181 13:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:12.181 13:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:12.181 13:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:12.442 13:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:12.442 13:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:12.442 13:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:12.442 13:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:12.442 13:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:12.442 13:27:01 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:12.442 13:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:12.442 13:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:12.442 13:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:12.442 13:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:12.442 13:27:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.442 13:27:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.442 13:27:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.442 13:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:12.442 13:27:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.442 13:27:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.442 [2024-11-17 13:27:01.447468] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:12.442 [2024-11-17 13:27:01.447526] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:12.442 [2024-11-17 13:27:01.447552] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:12.442 [2024-11-17 13:27:01.447561] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:12.442 [2024-11-17 13:27:01.449576] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:12.442 [2024-11-17 13:27:01.449612] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:12.442 [2024-11-17 13:27:01.449717] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: 
raid superblock found on bdev spare 00:17:12.442 [2024-11-17 13:27:01.449766] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:12.442 [2024-11-17 13:27:01.449927] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:12.442 spare 00:17:12.442 13:27:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.442 13:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:12.442 13:27:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.442 13:27:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.442 [2024-11-17 13:27:01.549843] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:12.442 [2024-11-17 13:27:01.549874] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:12.442 [2024-11-17 13:27:01.550131] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:17:12.442 [2024-11-17 13:27:01.550342] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:12.442 [2024-11-17 13:27:01.550371] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:12.442 [2024-11-17 13:27:01.550548] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:12.442 13:27:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.442 13:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:12.442 13:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:12.442 13:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:12.442 
13:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:12.442 13:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:12.442 13:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:12.442 13:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:12.443 13:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:12.443 13:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:12.443 13:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:12.443 13:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.443 13:27:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.443 13:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.443 13:27:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.443 13:27:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.443 13:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:12.443 "name": "raid_bdev1", 00:17:12.443 "uuid": "1e208572-47fa-4f0d-8ee3-b8ea70b0e44d", 00:17:12.443 "strip_size_kb": 0, 00:17:12.443 "state": "online", 00:17:12.443 "raid_level": "raid1", 00:17:12.443 "superblock": true, 00:17:12.443 "num_base_bdevs": 2, 00:17:12.443 "num_base_bdevs_discovered": 2, 00:17:12.443 "num_base_bdevs_operational": 2, 00:17:12.443 "base_bdevs_list": [ 00:17:12.443 { 00:17:12.443 "name": "spare", 00:17:12.443 "uuid": "9d769ae4-946c-5e94-8310-1aa77cf5c6ae", 00:17:12.443 "is_configured": true, 00:17:12.443 "data_offset": 256, 00:17:12.443 
"data_size": 7936 00:17:12.443 }, 00:17:12.443 { 00:17:12.443 "name": "BaseBdev2", 00:17:12.443 "uuid": "c73f92f8-e62c-5f5a-94ce-a01c06c737e3", 00:17:12.443 "is_configured": true, 00:17:12.443 "data_offset": 256, 00:17:12.443 "data_size": 7936 00:17:12.443 } 00:17:12.443 ] 00:17:12.443 }' 00:17:12.443 13:27:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:12.443 13:27:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.013 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:13.013 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:13.013 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:13.013 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:13.013 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:13.013 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.013 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.013 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.013 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.013 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.013 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:13.013 "name": "raid_bdev1", 00:17:13.013 "uuid": "1e208572-47fa-4f0d-8ee3-b8ea70b0e44d", 00:17:13.013 "strip_size_kb": 0, 00:17:13.013 "state": "online", 00:17:13.013 "raid_level": "raid1", 00:17:13.013 "superblock": true, 00:17:13.013 "num_base_bdevs": 2, 
00:17:13.013 "num_base_bdevs_discovered": 2, 00:17:13.013 "num_base_bdevs_operational": 2, 00:17:13.013 "base_bdevs_list": [ 00:17:13.013 { 00:17:13.013 "name": "spare", 00:17:13.013 "uuid": "9d769ae4-946c-5e94-8310-1aa77cf5c6ae", 00:17:13.013 "is_configured": true, 00:17:13.013 "data_offset": 256, 00:17:13.013 "data_size": 7936 00:17:13.013 }, 00:17:13.013 { 00:17:13.013 "name": "BaseBdev2", 00:17:13.013 "uuid": "c73f92f8-e62c-5f5a-94ce-a01c06c737e3", 00:17:13.013 "is_configured": true, 00:17:13.013 "data_offset": 256, 00:17:13.013 "data_size": 7936 00:17:13.013 } 00:17:13.013 ] 00:17:13.013 }' 00:17:13.013 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:13.013 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:13.013 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:13.013 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:13.013 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.013 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:13.013 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.013 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.013 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.013 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:13.013 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:13.013 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.013 13:27:02 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.013 [2024-11-17 13:27:02.230268] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:13.013 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.013 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:13.013 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:13.013 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:13.013 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:13.013 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:13.013 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:13.013 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.013 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.013 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.013 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.274 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.274 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.274 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.274 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.274 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.274 
13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.274 "name": "raid_bdev1", 00:17:13.274 "uuid": "1e208572-47fa-4f0d-8ee3-b8ea70b0e44d", 00:17:13.274 "strip_size_kb": 0, 00:17:13.274 "state": "online", 00:17:13.274 "raid_level": "raid1", 00:17:13.274 "superblock": true, 00:17:13.274 "num_base_bdevs": 2, 00:17:13.274 "num_base_bdevs_discovered": 1, 00:17:13.274 "num_base_bdevs_operational": 1, 00:17:13.274 "base_bdevs_list": [ 00:17:13.274 { 00:17:13.274 "name": null, 00:17:13.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.274 "is_configured": false, 00:17:13.274 "data_offset": 0, 00:17:13.274 "data_size": 7936 00:17:13.274 }, 00:17:13.274 { 00:17:13.274 "name": "BaseBdev2", 00:17:13.274 "uuid": "c73f92f8-e62c-5f5a-94ce-a01c06c737e3", 00:17:13.274 "is_configured": true, 00:17:13.274 "data_offset": 256, 00:17:13.274 "data_size": 7936 00:17:13.274 } 00:17:13.274 ] 00:17:13.274 }' 00:17:13.274 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.274 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.534 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:13.534 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.534 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.534 [2024-11-17 13:27:02.693478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:13.534 [2024-11-17 13:27:02.693664] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:13.534 [2024-11-17 13:27:02.693683] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
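The `waitfornbd` helper traced earlier in this run (autotest_common.sh@872-893) polls `/proc/partitions` until the nbd device name appears, then proves the device serves I/O with a single 4 KiB direct-mode `dd` read. A minimal standalone sketch of that polling pattern, with the partitions listing made a parameter so the loop can be exercised without a real /dev/nbd device — the helper name, parameterization, and sleep interval here are assumptions, not copied from the SPDK scripts:

```shell
#!/usr/bin/env bash
# Sketch of the waitfornbd pattern from the trace: retry up to 20 times,
# breaking as soon as the device name shows up in the partitions listing.
# The real helper follows up with a 4 KiB direct-I/O dd read to confirm
# the device actually serves data; that step is omitted in this sketch.
waitfornbd_sketch() {
    local nbd_name=$1 partitions_file=${2:-/proc/partitions} i
    for ((i = 1; i <= 20; i++)); do
        # grep -w avoids matching nbd10 while waiting for nbd1
        grep -q -w "$nbd_name" "$partitions_file" && return 0
        sleep 0.1
    done
    return 1  # device never appeared within the retry budget
}
```

The `-w` on grep matters once two devices are in play, as in this test: a bare substring match on `nbd1` would also fire on `nbd10` through `nbd19`.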
00:17:13.534 [2024-11-17 13:27:02.693716] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:13.534 [2024-11-17 13:27:02.709027] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:17:13.534 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.534 13:27:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:13.534 [2024-11-17 13:27:02.710883] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:14.915 13:27:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:14.915 13:27:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:14.915 13:27:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:14.915 13:27:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:14.915 13:27:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:14.915 13:27:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.915 13:27:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.915 13:27:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.915 13:27:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:14.915 13:27:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.915 13:27:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:14.915 "name": "raid_bdev1", 00:17:14.915 "uuid": "1e208572-47fa-4f0d-8ee3-b8ea70b0e44d", 00:17:14.915 "strip_size_kb": 0, 00:17:14.915 "state": "online", 
00:17:14.915 "raid_level": "raid1", 00:17:14.915 "superblock": true, 00:17:14.915 "num_base_bdevs": 2, 00:17:14.915 "num_base_bdevs_discovered": 2, 00:17:14.915 "num_base_bdevs_operational": 2, 00:17:14.915 "process": { 00:17:14.915 "type": "rebuild", 00:17:14.915 "target": "spare", 00:17:14.915 "progress": { 00:17:14.915 "blocks": 2560, 00:17:14.915 "percent": 32 00:17:14.915 } 00:17:14.915 }, 00:17:14.915 "base_bdevs_list": [ 00:17:14.916 { 00:17:14.916 "name": "spare", 00:17:14.916 "uuid": "9d769ae4-946c-5e94-8310-1aa77cf5c6ae", 00:17:14.916 "is_configured": true, 00:17:14.916 "data_offset": 256, 00:17:14.916 "data_size": 7936 00:17:14.916 }, 00:17:14.916 { 00:17:14.916 "name": "BaseBdev2", 00:17:14.916 "uuid": "c73f92f8-e62c-5f5a-94ce-a01c06c737e3", 00:17:14.916 "is_configured": true, 00:17:14.916 "data_offset": 256, 00:17:14.916 "data_size": 7936 00:17:14.916 } 00:17:14.916 ] 00:17:14.916 }' 00:17:14.916 13:27:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:14.916 13:27:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:14.916 13:27:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:14.916 13:27:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:14.916 13:27:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:14.916 13:27:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.916 13:27:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:14.916 [2024-11-17 13:27:03.878758] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:14.916 [2024-11-17 13:27:03.915555] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:14.916 [2024-11-17 
13:27:03.915629] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:14.916 [2024-11-17 13:27:03.915645] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:14.916 [2024-11-17 13:27:03.915654] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:14.916 13:27:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.916 13:27:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:14.916 13:27:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:14.916 13:27:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:14.916 13:27:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:14.916 13:27:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:14.916 13:27:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:14.916 13:27:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:14.916 13:27:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:14.916 13:27:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:14.916 13:27:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:14.916 13:27:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.916 13:27:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.916 13:27:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.916 13:27:03 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:17:14.916 13:27:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.916 13:27:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:14.916 "name": "raid_bdev1", 00:17:14.916 "uuid": "1e208572-47fa-4f0d-8ee3-b8ea70b0e44d", 00:17:14.916 "strip_size_kb": 0, 00:17:14.916 "state": "online", 00:17:14.916 "raid_level": "raid1", 00:17:14.916 "superblock": true, 00:17:14.916 "num_base_bdevs": 2, 00:17:14.916 "num_base_bdevs_discovered": 1, 00:17:14.916 "num_base_bdevs_operational": 1, 00:17:14.916 "base_bdevs_list": [ 00:17:14.916 { 00:17:14.916 "name": null, 00:17:14.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.916 "is_configured": false, 00:17:14.916 "data_offset": 0, 00:17:14.916 "data_size": 7936 00:17:14.916 }, 00:17:14.916 { 00:17:14.916 "name": "BaseBdev2", 00:17:14.916 "uuid": "c73f92f8-e62c-5f5a-94ce-a01c06c737e3", 00:17:14.916 "is_configured": true, 00:17:14.916 "data_offset": 256, 00:17:14.916 "data_size": 7936 00:17:14.916 } 00:17:14.916 ] 00:17:14.916 }' 00:17:14.916 13:27:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:14.916 13:27:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:15.486 13:27:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:15.486 13:27:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.486 13:27:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:15.486 [2024-11-17 13:27:04.434074] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:15.486 [2024-11-17 13:27:04.434136] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:15.486 [2024-11-17 13:27:04.434158] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000ab80 00:17:15.486 [2024-11-17 13:27:04.434168] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:15.486 [2024-11-17 13:27:04.434654] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:15.486 [2024-11-17 13:27:04.434686] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:15.486 [2024-11-17 13:27:04.434797] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:15.486 [2024-11-17 13:27:04.434835] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:15.486 [2024-11-17 13:27:04.434846] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:15.486 [2024-11-17 13:27:04.434874] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:15.486 [2024-11-17 13:27:04.450631] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:17:15.486 spare 00:17:15.486 13:27:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.486 13:27:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:15.486 [2024-11-17 13:27:04.452742] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:16.425 13:27:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:16.425 13:27:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:16.425 13:27:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:16.425 13:27:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:16.425 13:27:05 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:16.425 13:27:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.425 13:27:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.425 13:27:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.425 13:27:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.425 13:27:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.425 13:27:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:16.425 "name": "raid_bdev1", 00:17:16.425 "uuid": "1e208572-47fa-4f0d-8ee3-b8ea70b0e44d", 00:17:16.425 "strip_size_kb": 0, 00:17:16.425 "state": "online", 00:17:16.425 "raid_level": "raid1", 00:17:16.425 "superblock": true, 00:17:16.425 "num_base_bdevs": 2, 00:17:16.425 "num_base_bdevs_discovered": 2, 00:17:16.425 "num_base_bdevs_operational": 2, 00:17:16.425 "process": { 00:17:16.425 "type": "rebuild", 00:17:16.425 "target": "spare", 00:17:16.425 "progress": { 00:17:16.425 "blocks": 2560, 00:17:16.425 "percent": 32 00:17:16.426 } 00:17:16.426 }, 00:17:16.426 "base_bdevs_list": [ 00:17:16.426 { 00:17:16.426 "name": "spare", 00:17:16.426 "uuid": "9d769ae4-946c-5e94-8310-1aa77cf5c6ae", 00:17:16.426 "is_configured": true, 00:17:16.426 "data_offset": 256, 00:17:16.426 "data_size": 7936 00:17:16.426 }, 00:17:16.426 { 00:17:16.426 "name": "BaseBdev2", 00:17:16.426 "uuid": "c73f92f8-e62c-5f5a-94ce-a01c06c737e3", 00:17:16.426 "is_configured": true, 00:17:16.426 "data_offset": 256, 00:17:16.426 "data_size": 7936 00:17:16.426 } 00:17:16.426 ] 00:17:16.426 }' 00:17:16.426 13:27:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:16.426 13:27:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:17:16.426 13:27:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:16.426 13:27:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:16.426 13:27:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:16.426 13:27:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.426 13:27:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.426 [2024-11-17 13:27:05.595904] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:16.686 [2024-11-17 13:27:05.657965] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:16.686 [2024-11-17 13:27:05.658039] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:16.686 [2024-11-17 13:27:05.658056] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:16.686 [2024-11-17 13:27:05.658063] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:16.686 13:27:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.686 13:27:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:16.686 13:27:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:16.686 13:27:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:16.686 13:27:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:16.686 13:27:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:16.686 13:27:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:17:16.686 13:27:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:16.686 13:27:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:16.686 13:27:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:16.686 13:27:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:16.686 13:27:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.686 13:27:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.686 13:27:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.686 13:27:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.686 13:27:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.686 13:27:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:16.686 "name": "raid_bdev1", 00:17:16.686 "uuid": "1e208572-47fa-4f0d-8ee3-b8ea70b0e44d", 00:17:16.686 "strip_size_kb": 0, 00:17:16.686 "state": "online", 00:17:16.686 "raid_level": "raid1", 00:17:16.686 "superblock": true, 00:17:16.686 "num_base_bdevs": 2, 00:17:16.686 "num_base_bdevs_discovered": 1, 00:17:16.686 "num_base_bdevs_operational": 1, 00:17:16.686 "base_bdevs_list": [ 00:17:16.686 { 00:17:16.686 "name": null, 00:17:16.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:16.686 "is_configured": false, 00:17:16.686 "data_offset": 0, 00:17:16.686 "data_size": 7936 00:17:16.686 }, 00:17:16.686 { 00:17:16.686 "name": "BaseBdev2", 00:17:16.686 "uuid": "c73f92f8-e62c-5f5a-94ce-a01c06c737e3", 00:17:16.686 "is_configured": true, 00:17:16.686 "data_offset": 256, 00:17:16.686 "data_size": 7936 00:17:16.686 } 00:17:16.686 ] 00:17:16.686 }' 
00:17:16.686 13:27:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:16.686 13:27:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.946 13:27:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:16.946 13:27:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:16.946 13:27:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:16.946 13:27:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:16.946 13:27:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:16.946 13:27:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.946 13:27:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.946 13:27:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.946 13:27:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.947 13:27:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.947 13:27:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:16.947 "name": "raid_bdev1", 00:17:16.947 "uuid": "1e208572-47fa-4f0d-8ee3-b8ea70b0e44d", 00:17:16.947 "strip_size_kb": 0, 00:17:16.947 "state": "online", 00:17:16.947 "raid_level": "raid1", 00:17:16.947 "superblock": true, 00:17:16.947 "num_base_bdevs": 2, 00:17:16.947 "num_base_bdevs_discovered": 1, 00:17:16.947 "num_base_bdevs_operational": 1, 00:17:16.947 "base_bdevs_list": [ 00:17:16.947 { 00:17:16.947 "name": null, 00:17:16.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:16.947 "is_configured": false, 00:17:16.947 "data_offset": 0, 
00:17:16.947 "data_size": 7936 00:17:16.947 }, 00:17:16.947 { 00:17:16.947 "name": "BaseBdev2", 00:17:16.947 "uuid": "c73f92f8-e62c-5f5a-94ce-a01c06c737e3", 00:17:16.947 "is_configured": true, 00:17:16.947 "data_offset": 256, 00:17:16.947 "data_size": 7936 00:17:16.947 } 00:17:16.947 ] 00:17:16.947 }' 00:17:16.947 13:27:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:17.210 13:27:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:17.210 13:27:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:17.210 13:27:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:17.210 13:27:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:17.210 13:27:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.210 13:27:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.210 13:27:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.210 13:27:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:17.210 13:27:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.210 13:27:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.210 [2024-11-17 13:27:06.231991] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:17.210 [2024-11-17 13:27:06.232066] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:17.210 [2024-11-17 13:27:06.232089] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:17.210 [2024-11-17 13:27:06.232107] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:17.210 [2024-11-17 13:27:06.232565] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:17.210 [2024-11-17 13:27:06.232592] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:17.210 [2024-11-17 13:27:06.232676] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:17.210 [2024-11-17 13:27:06.232696] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:17.210 [2024-11-17 13:27:06.232705] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:17.210 [2024-11-17 13:27:06.232715] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:17.210 BaseBdev1 00:17:17.210 13:27:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.210 13:27:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:18.155 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:18.155 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:18.155 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:18.155 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:18.155 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:18.155 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:18.155 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:18.155 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:18.155 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:18.155 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:18.155 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.155 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.155 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.155 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.155 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.155 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:18.155 "name": "raid_bdev1", 00:17:18.155 "uuid": "1e208572-47fa-4f0d-8ee3-b8ea70b0e44d", 00:17:18.155 "strip_size_kb": 0, 00:17:18.155 "state": "online", 00:17:18.155 "raid_level": "raid1", 00:17:18.155 "superblock": true, 00:17:18.155 "num_base_bdevs": 2, 00:17:18.155 "num_base_bdevs_discovered": 1, 00:17:18.155 "num_base_bdevs_operational": 1, 00:17:18.155 "base_bdevs_list": [ 00:17:18.155 { 00:17:18.155 "name": null, 00:17:18.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.155 "is_configured": false, 00:17:18.155 "data_offset": 0, 00:17:18.155 "data_size": 7936 00:17:18.155 }, 00:17:18.155 { 00:17:18.155 "name": "BaseBdev2", 00:17:18.155 "uuid": "c73f92f8-e62c-5f5a-94ce-a01c06c737e3", 00:17:18.155 "is_configured": true, 00:17:18.155 "data_offset": 256, 00:17:18.155 "data_size": 7936 00:17:18.155 } 00:17:18.155 ] 00:17:18.155 }' 00:17:18.155 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:18.155 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 
00:17:18.730 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:18.730 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:18.730 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:18.730 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:18.730 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:18.730 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.730 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.730 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.730 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.730 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.730 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:18.730 "name": "raid_bdev1", 00:17:18.730 "uuid": "1e208572-47fa-4f0d-8ee3-b8ea70b0e44d", 00:17:18.730 "strip_size_kb": 0, 00:17:18.730 "state": "online", 00:17:18.730 "raid_level": "raid1", 00:17:18.730 "superblock": true, 00:17:18.730 "num_base_bdevs": 2, 00:17:18.730 "num_base_bdevs_discovered": 1, 00:17:18.730 "num_base_bdevs_operational": 1, 00:17:18.730 "base_bdevs_list": [ 00:17:18.730 { 00:17:18.730 "name": null, 00:17:18.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.730 "is_configured": false, 00:17:18.730 "data_offset": 0, 00:17:18.730 "data_size": 7936 00:17:18.730 }, 00:17:18.730 { 00:17:18.730 "name": "BaseBdev2", 00:17:18.730 "uuid": "c73f92f8-e62c-5f5a-94ce-a01c06c737e3", 00:17:18.730 "is_configured": true, 
00:17:18.730 "data_offset": 256, 00:17:18.730 "data_size": 7936 00:17:18.730 } 00:17:18.730 ] 00:17:18.730 }' 00:17:18.730 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:18.730 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:18.730 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:18.730 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:18.730 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:18.730 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # local es=0 00:17:18.730 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:18.730 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:18.730 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:18.730 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:18.730 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:18.730 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:18.730 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.730 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.730 [2024-11-17 13:27:07.825501] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:18.730 [2024-11-17 13:27:07.825677] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:18.730 [2024-11-17 13:27:07.825692] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:18.730 request: 00:17:18.730 { 00:17:18.730 "base_bdev": "BaseBdev1", 00:17:18.730 "raid_bdev": "raid_bdev1", 00:17:18.730 "method": "bdev_raid_add_base_bdev", 00:17:18.730 "req_id": 1 00:17:18.730 } 00:17:18.730 Got JSON-RPC error response 00:17:18.730 response: 00:17:18.730 { 00:17:18.730 "code": -22, 00:17:18.730 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:18.730 } 00:17:18.730 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:18.730 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:17:18.730 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:18.730 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:18.730 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:18.730 13:27:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:19.670 13:27:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:19.670 13:27:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:19.670 13:27:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:19.670 13:27:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:19.670 13:27:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:19.670 13:27:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:17:19.670 13:27:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:19.670 13:27:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:19.670 13:27:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:19.670 13:27:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:19.670 13:27:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.670 13:27:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.670 13:27:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.670 13:27:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.670 13:27:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.670 13:27:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:19.670 "name": "raid_bdev1", 00:17:19.670 "uuid": "1e208572-47fa-4f0d-8ee3-b8ea70b0e44d", 00:17:19.670 "strip_size_kb": 0, 00:17:19.670 "state": "online", 00:17:19.670 "raid_level": "raid1", 00:17:19.670 "superblock": true, 00:17:19.670 "num_base_bdevs": 2, 00:17:19.670 "num_base_bdevs_discovered": 1, 00:17:19.670 "num_base_bdevs_operational": 1, 00:17:19.670 "base_bdevs_list": [ 00:17:19.670 { 00:17:19.670 "name": null, 00:17:19.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.670 "is_configured": false, 00:17:19.670 "data_offset": 0, 00:17:19.670 "data_size": 7936 00:17:19.670 }, 00:17:19.670 { 00:17:19.670 "name": "BaseBdev2", 00:17:19.670 "uuid": "c73f92f8-e62c-5f5a-94ce-a01c06c737e3", 00:17:19.670 "is_configured": true, 00:17:19.670 "data_offset": 256, 00:17:19.670 "data_size": 7936 00:17:19.670 } 00:17:19.670 ] 00:17:19.670 }' 
00:17:19.670 13:27:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:19.670 13:27:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.238 13:27:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:20.238 13:27:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:20.238 13:27:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:20.238 13:27:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:20.238 13:27:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:20.238 13:27:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.238 13:27:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.238 13:27:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.238 13:27:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.238 13:27:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.238 13:27:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:20.238 "name": "raid_bdev1", 00:17:20.238 "uuid": "1e208572-47fa-4f0d-8ee3-b8ea70b0e44d", 00:17:20.239 "strip_size_kb": 0, 00:17:20.239 "state": "online", 00:17:20.239 "raid_level": "raid1", 00:17:20.239 "superblock": true, 00:17:20.239 "num_base_bdevs": 2, 00:17:20.239 "num_base_bdevs_discovered": 1, 00:17:20.239 "num_base_bdevs_operational": 1, 00:17:20.239 "base_bdevs_list": [ 00:17:20.239 { 00:17:20.239 "name": null, 00:17:20.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.239 "is_configured": false, 00:17:20.239 "data_offset": 0, 
00:17:20.239 "data_size": 7936 00:17:20.239 }, 00:17:20.239 { 00:17:20.239 "name": "BaseBdev2", 00:17:20.239 "uuid": "c73f92f8-e62c-5f5a-94ce-a01c06c737e3", 00:17:20.239 "is_configured": true, 00:17:20.239 "data_offset": 256, 00:17:20.239 "data_size": 7936 00:17:20.239 } 00:17:20.239 ] 00:17:20.239 }' 00:17:20.239 13:27:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:20.239 13:27:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:20.239 13:27:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:20.239 13:27:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:20.239 13:27:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 86365 00:17:20.239 13:27:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86365 ']' 00:17:20.239 13:27:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86365 00:17:20.239 13:27:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:17:20.239 13:27:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:20.239 13:27:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86365 00:17:20.239 13:27:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:20.239 13:27:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:20.239 killing process with pid 86365 00:17:20.239 13:27:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86365' 00:17:20.239 13:27:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86365 00:17:20.239 Received shutdown signal, test time was about 
60.000000 seconds 00:17:20.239 00:17:20.239 Latency(us) 00:17:20.239 [2024-11-17T13:27:09.463Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:20.239 [2024-11-17T13:27:09.463Z] =================================================================================================================== 00:17:20.239 [2024-11-17T13:27:09.463Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:20.239 [2024-11-17 13:27:09.463500] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:20.239 [2024-11-17 13:27:09.463651] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:20.239 13:27:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86365 00:17:20.239 [2024-11-17 13:27:09.463712] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:20.239 [2024-11-17 13:27:09.463725] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:20.807 [2024-11-17 13:27:09.758989] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:21.743 13:27:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:17:21.743 00:17:21.743 real 0m19.802s 00:17:21.743 user 0m25.747s 00:17:21.743 sys 0m2.748s 00:17:21.743 13:27:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:21.743 13:27:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.743 ************************************ 00:17:21.743 END TEST raid_rebuild_test_sb_4k 00:17:21.743 ************************************ 00:17:21.743 13:27:10 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:17:21.743 13:27:10 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:17:21.743 13:27:10 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:21.743 13:27:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:21.743 13:27:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:21.743 ************************************ 00:17:21.743 START TEST raid_state_function_test_sb_md_separate 00:17:21.743 ************************************ 00:17:21.743 13:27:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:17:21.743 13:27:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:21.743 13:27:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:21.743 13:27:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:21.743 13:27:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:21.743 13:27:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:21.743 13:27:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:21.743 13:27:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:21.743 13:27:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:21.743 13:27:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:21.743 13:27:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:21.743 13:27:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:21.743 13:27:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:21.743 13:27:10 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:21.744 13:27:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:21.744 13:27:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:21.744 13:27:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:21.744 13:27:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:21.744 13:27:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:21.744 13:27:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:21.744 13:27:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:21.744 13:27:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:21.744 13:27:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:21.744 13:27:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87051 00:17:21.744 13:27:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:21.744 13:27:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87051' 00:17:21.744 Process raid pid: 87051 00:17:21.744 13:27:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87051 00:17:21.744 13:27:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87051 ']' 00:17:21.744 13:27:10 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:21.744 13:27:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:21.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:21.744 13:27:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:21.744 13:27:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:21.744 13:27:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:22.003 [2024-11-17 13:27:11.025377] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:17:22.003 [2024-11-17 13:27:11.025500] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:22.003 [2024-11-17 13:27:11.199827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.262 [2024-11-17 13:27:11.317467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.520 [2024-11-17 13:27:11.527038] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:22.520 [2024-11-17 13:27:11.527074] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:22.779 13:27:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:22.779 13:27:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:17:22.779 13:27:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # 
rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:22.779 13:27:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.779 13:27:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:22.779 [2024-11-17 13:27:11.849923] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:22.779 [2024-11-17 13:27:11.849981] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:22.779 [2024-11-17 13:27:11.849992] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:22.779 [2024-11-17 13:27:11.850002] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:22.779 13:27:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.779 13:27:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:22.779 13:27:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:22.779 13:27:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:22.779 13:27:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:22.779 13:27:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:22.779 13:27:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:22.779 13:27:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:22.779 13:27:11 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:22.779 13:27:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:22.779 13:27:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:22.779 13:27:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:22.779 13:27:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.779 13:27:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.779 13:27:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:22.779 13:27:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.779 13:27:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:22.779 "name": "Existed_Raid", 00:17:22.779 "uuid": "25fb0ca2-eafb-4446-8ef1-5abbb0660a24", 00:17:22.779 "strip_size_kb": 0, 00:17:22.779 "state": "configuring", 00:17:22.779 "raid_level": "raid1", 00:17:22.779 "superblock": true, 00:17:22.779 "num_base_bdevs": 2, 00:17:22.779 "num_base_bdevs_discovered": 0, 00:17:22.779 "num_base_bdevs_operational": 2, 00:17:22.779 "base_bdevs_list": [ 00:17:22.779 { 00:17:22.779 "name": "BaseBdev1", 00:17:22.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.779 "is_configured": false, 00:17:22.779 "data_offset": 0, 00:17:22.779 "data_size": 0 00:17:22.779 }, 00:17:22.779 { 00:17:22.779 "name": "BaseBdev2", 00:17:22.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.779 "is_configured": false, 00:17:22.779 "data_offset": 0, 00:17:22.779 "data_size": 0 00:17:22.779 } 00:17:22.779 ] 00:17:22.779 }' 00:17:22.779 13:27:11 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:22.779 13:27:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:23.348 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:23.348 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.348 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:23.348 [2024-11-17 13:27:12.297092] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:23.348 [2024-11-17 13:27:12.297135] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:23.348 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.348 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:23.348 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.348 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:23.348 [2024-11-17 13:27:12.309060] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:23.348 [2024-11-17 13:27:12.309103] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:23.348 [2024-11-17 13:27:12.309112] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:23.348 [2024-11-17 13:27:12.309124] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:23.348 13:27:12 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.348 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:17:23.348 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.348 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:23.348 [2024-11-17 13:27:12.356718] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:23.348 BaseBdev1 00:17:23.348 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.348 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:23.348 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:23.348 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:23.348 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:17:23.348 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:23.348 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:23.348 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:23.348 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.348 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:23.348 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.348 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:23.348 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.348 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:23.348 [ 00:17:23.348 { 00:17:23.348 "name": "BaseBdev1", 00:17:23.348 "aliases": [ 00:17:23.348 "ccd0fbc7-7661-47ba-9bb1-1525a612ee16" 00:17:23.348 ], 00:17:23.348 "product_name": "Malloc disk", 00:17:23.348 "block_size": 4096, 00:17:23.348 "num_blocks": 8192, 00:17:23.348 "uuid": "ccd0fbc7-7661-47ba-9bb1-1525a612ee16", 00:17:23.348 "md_size": 32, 00:17:23.348 "md_interleave": false, 00:17:23.348 "dif_type": 0, 00:17:23.348 "assigned_rate_limits": { 00:17:23.348 "rw_ios_per_sec": 0, 00:17:23.348 "rw_mbytes_per_sec": 0, 00:17:23.348 "r_mbytes_per_sec": 0, 00:17:23.348 "w_mbytes_per_sec": 0 00:17:23.348 }, 00:17:23.348 "claimed": true, 00:17:23.348 "claim_type": "exclusive_write", 00:17:23.348 "zoned": false, 00:17:23.348 "supported_io_types": { 00:17:23.348 "read": true, 00:17:23.348 "write": true, 00:17:23.348 "unmap": true, 00:17:23.348 "flush": true, 00:17:23.348 "reset": true, 00:17:23.348 "nvme_admin": false, 00:17:23.348 "nvme_io": false, 00:17:23.348 "nvme_io_md": false, 00:17:23.348 "write_zeroes": true, 00:17:23.348 "zcopy": true, 00:17:23.348 "get_zone_info": false, 00:17:23.348 "zone_management": false, 00:17:23.348 "zone_append": false, 00:17:23.348 "compare": false, 00:17:23.348 "compare_and_write": false, 00:17:23.348 "abort": true, 00:17:23.348 "seek_hole": false, 00:17:23.348 "seek_data": false, 00:17:23.348 "copy": true, 00:17:23.348 "nvme_iov_md": false 00:17:23.348 }, 00:17:23.348 "memory_domains": [ 00:17:23.348 { 00:17:23.348 "dma_device_id": "system", 00:17:23.348 "dma_device_type": 1 00:17:23.348 }, 
00:17:23.348 { 00:17:23.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:23.348 "dma_device_type": 2 00:17:23.348 } 00:17:23.348 ], 00:17:23.348 "driver_specific": {} 00:17:23.348 } 00:17:23.348 ] 00:17:23.348 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.348 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:17:23.348 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:23.348 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:23.348 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:23.348 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:23.348 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:23.348 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:23.348 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:23.348 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:23.348 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:23.348 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:23.348 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:23.348 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:17:23.348 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.348 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:23.348 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.348 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:23.348 "name": "Existed_Raid", 00:17:23.348 "uuid": "3cd5a960-497c-449d-9940-7d81d2ef1714", 00:17:23.348 "strip_size_kb": 0, 00:17:23.348 "state": "configuring", 00:17:23.348 "raid_level": "raid1", 00:17:23.348 "superblock": true, 00:17:23.348 "num_base_bdevs": 2, 00:17:23.348 "num_base_bdevs_discovered": 1, 00:17:23.348 "num_base_bdevs_operational": 2, 00:17:23.348 "base_bdevs_list": [ 00:17:23.348 { 00:17:23.348 "name": "BaseBdev1", 00:17:23.348 "uuid": "ccd0fbc7-7661-47ba-9bb1-1525a612ee16", 00:17:23.348 "is_configured": true, 00:17:23.348 "data_offset": 256, 00:17:23.348 "data_size": 7936 00:17:23.348 }, 00:17:23.348 { 00:17:23.348 "name": "BaseBdev2", 00:17:23.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.348 "is_configured": false, 00:17:23.348 "data_offset": 0, 00:17:23.348 "data_size": 0 00:17:23.348 } 00:17:23.348 ] 00:17:23.348 }' 00:17:23.348 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:23.348 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:23.607 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:23.607 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.607 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:17:23.608 [2024-11-17 13:27:12.820044] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:23.608 [2024-11-17 13:27:12.820166] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:23.608 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.608 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:23.608 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.608 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:23.608 [2024-11-17 13:27:12.832039] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:23.867 [2024-11-17 13:27:12.833917] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:23.867 [2024-11-17 13:27:12.833993] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:23.867 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.867 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:23.867 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:23.867 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:23.867 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:23.867 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:17:23.867 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:23.867 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:23.867 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:23.867 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:23.867 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:23.867 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:23.867 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:23.867 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:23.867 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.867 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.867 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:23.867 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.867 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:23.867 "name": "Existed_Raid", 00:17:23.867 "uuid": "86d7021b-eee5-4db6-89b3-5485e48fe752", 00:17:23.867 "strip_size_kb": 0, 00:17:23.867 "state": "configuring", 00:17:23.867 "raid_level": "raid1", 00:17:23.867 "superblock": true, 00:17:23.867 "num_base_bdevs": 2, 00:17:23.867 "num_base_bdevs_discovered": 1, 00:17:23.867 
"num_base_bdevs_operational": 2, 00:17:23.867 "base_bdevs_list": [ 00:17:23.867 { 00:17:23.867 "name": "BaseBdev1", 00:17:23.867 "uuid": "ccd0fbc7-7661-47ba-9bb1-1525a612ee16", 00:17:23.867 "is_configured": true, 00:17:23.867 "data_offset": 256, 00:17:23.867 "data_size": 7936 00:17:23.867 }, 00:17:23.867 { 00:17:23.867 "name": "BaseBdev2", 00:17:23.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.867 "is_configured": false, 00:17:23.867 "data_offset": 0, 00:17:23.867 "data_size": 0 00:17:23.867 } 00:17:23.867 ] 00:17:23.867 }' 00:17:23.867 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:23.867 13:27:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.126 13:27:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:17:24.126 13:27:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.126 13:27:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.126 [2024-11-17 13:27:13.270644] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:24.126 [2024-11-17 13:27:13.270953] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:24.126 [2024-11-17 13:27:13.270972] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:24.126 [2024-11-17 13:27:13.271056] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:24.126 [2024-11-17 13:27:13.271175] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:24.126 [2024-11-17 13:27:13.271184] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:24.126 [2024-11-17 
13:27:13.271295] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:24.126 BaseBdev2 00:17:24.126 13:27:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.127 13:27:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:24.127 13:27:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:24.127 13:27:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:24.127 13:27:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:17:24.127 13:27:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:24.127 13:27:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:24.127 13:27:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:24.127 13:27:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.127 13:27:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.127 13:27:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.127 13:27:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:24.127 13:27:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.127 13:27:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.127 [ 00:17:24.127 { 00:17:24.127 "name": "BaseBdev2", 00:17:24.127 "aliases": [ 00:17:24.127 
"d37d0702-7b04-4f3b-868a-f1ea94ae04c8" 00:17:24.127 ], 00:17:24.127 "product_name": "Malloc disk", 00:17:24.127 "block_size": 4096, 00:17:24.127 "num_blocks": 8192, 00:17:24.127 "uuid": "d37d0702-7b04-4f3b-868a-f1ea94ae04c8", 00:17:24.127 "md_size": 32, 00:17:24.127 "md_interleave": false, 00:17:24.127 "dif_type": 0, 00:17:24.127 "assigned_rate_limits": { 00:17:24.127 "rw_ios_per_sec": 0, 00:17:24.127 "rw_mbytes_per_sec": 0, 00:17:24.127 "r_mbytes_per_sec": 0, 00:17:24.127 "w_mbytes_per_sec": 0 00:17:24.127 }, 00:17:24.127 "claimed": true, 00:17:24.127 "claim_type": "exclusive_write", 00:17:24.127 "zoned": false, 00:17:24.127 "supported_io_types": { 00:17:24.127 "read": true, 00:17:24.127 "write": true, 00:17:24.127 "unmap": true, 00:17:24.127 "flush": true, 00:17:24.127 "reset": true, 00:17:24.127 "nvme_admin": false, 00:17:24.127 "nvme_io": false, 00:17:24.127 "nvme_io_md": false, 00:17:24.127 "write_zeroes": true, 00:17:24.127 "zcopy": true, 00:17:24.127 "get_zone_info": false, 00:17:24.127 "zone_management": false, 00:17:24.127 "zone_append": false, 00:17:24.127 "compare": false, 00:17:24.127 "compare_and_write": false, 00:17:24.127 "abort": true, 00:17:24.127 "seek_hole": false, 00:17:24.127 "seek_data": false, 00:17:24.127 "copy": true, 00:17:24.127 "nvme_iov_md": false 00:17:24.127 }, 00:17:24.127 "memory_domains": [ 00:17:24.127 { 00:17:24.127 "dma_device_id": "system", 00:17:24.127 "dma_device_type": 1 00:17:24.127 }, 00:17:24.127 { 00:17:24.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:24.127 "dma_device_type": 2 00:17:24.127 } 00:17:24.127 ], 00:17:24.127 "driver_specific": {} 00:17:24.127 } 00:17:24.127 ] 00:17:24.127 13:27:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.127 13:27:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:17:24.127 13:27:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( 
i++ )) 00:17:24.127 13:27:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:24.127 13:27:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:24.127 13:27:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:24.127 13:27:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:24.127 13:27:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:24.127 13:27:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:24.127 13:27:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:24.127 13:27:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.127 13:27:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.127 13:27:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:24.127 13:27:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:24.127 13:27:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.127 13:27:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:24.127 13:27:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.127 13:27:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.127 13:27:13 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.386 13:27:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:24.386 "name": "Existed_Raid", 00:17:24.386 "uuid": "86d7021b-eee5-4db6-89b3-5485e48fe752", 00:17:24.386 "strip_size_kb": 0, 00:17:24.386 "state": "online", 00:17:24.386 "raid_level": "raid1", 00:17:24.386 "superblock": true, 00:17:24.386 "num_base_bdevs": 2, 00:17:24.386 "num_base_bdevs_discovered": 2, 00:17:24.386 "num_base_bdevs_operational": 2, 00:17:24.386 "base_bdevs_list": [ 00:17:24.386 { 00:17:24.386 "name": "BaseBdev1", 00:17:24.386 "uuid": "ccd0fbc7-7661-47ba-9bb1-1525a612ee16", 00:17:24.386 "is_configured": true, 00:17:24.386 "data_offset": 256, 00:17:24.386 "data_size": 7936 00:17:24.386 }, 00:17:24.386 { 00:17:24.386 "name": "BaseBdev2", 00:17:24.386 "uuid": "d37d0702-7b04-4f3b-868a-f1ea94ae04c8", 00:17:24.386 "is_configured": true, 00:17:24.386 "data_offset": 256, 00:17:24.386 "data_size": 7936 00:17:24.386 } 00:17:24.386 ] 00:17:24.386 }' 00:17:24.386 13:27:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:24.386 13:27:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.646 13:27:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:24.646 13:27:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:24.646 13:27:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:24.646 13:27:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:24.646 13:27:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:24.646 13:27:13 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:24.646 13:27:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:24.646 13:27:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:24.646 13:27:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.646 13:27:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.646 [2024-11-17 13:27:13.790059] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:24.646 13:27:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.646 13:27:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:24.646 "name": "Existed_Raid", 00:17:24.646 "aliases": [ 00:17:24.646 "86d7021b-eee5-4db6-89b3-5485e48fe752" 00:17:24.646 ], 00:17:24.646 "product_name": "Raid Volume", 00:17:24.646 "block_size": 4096, 00:17:24.646 "num_blocks": 7936, 00:17:24.646 "uuid": "86d7021b-eee5-4db6-89b3-5485e48fe752", 00:17:24.646 "md_size": 32, 00:17:24.646 "md_interleave": false, 00:17:24.646 "dif_type": 0, 00:17:24.646 "assigned_rate_limits": { 00:17:24.646 "rw_ios_per_sec": 0, 00:17:24.646 "rw_mbytes_per_sec": 0, 00:17:24.646 "r_mbytes_per_sec": 0, 00:17:24.646 "w_mbytes_per_sec": 0 00:17:24.646 }, 00:17:24.646 "claimed": false, 00:17:24.646 "zoned": false, 00:17:24.646 "supported_io_types": { 00:17:24.646 "read": true, 00:17:24.646 "write": true, 00:17:24.646 "unmap": false, 00:17:24.646 "flush": false, 00:17:24.646 "reset": true, 00:17:24.646 "nvme_admin": false, 00:17:24.646 "nvme_io": false, 00:17:24.646 "nvme_io_md": false, 00:17:24.646 "write_zeroes": true, 00:17:24.646 "zcopy": false, 00:17:24.646 "get_zone_info": 
false, 00:17:24.646 "zone_management": false, 00:17:24.646 "zone_append": false, 00:17:24.646 "compare": false, 00:17:24.646 "compare_and_write": false, 00:17:24.646 "abort": false, 00:17:24.646 "seek_hole": false, 00:17:24.646 "seek_data": false, 00:17:24.646 "copy": false, 00:17:24.646 "nvme_iov_md": false 00:17:24.646 }, 00:17:24.646 "memory_domains": [ 00:17:24.646 { 00:17:24.646 "dma_device_id": "system", 00:17:24.646 "dma_device_type": 1 00:17:24.646 }, 00:17:24.646 { 00:17:24.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:24.646 "dma_device_type": 2 00:17:24.646 }, 00:17:24.646 { 00:17:24.646 "dma_device_id": "system", 00:17:24.646 "dma_device_type": 1 00:17:24.646 }, 00:17:24.646 { 00:17:24.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:24.646 "dma_device_type": 2 00:17:24.646 } 00:17:24.646 ], 00:17:24.646 "driver_specific": { 00:17:24.646 "raid": { 00:17:24.646 "uuid": "86d7021b-eee5-4db6-89b3-5485e48fe752", 00:17:24.646 "strip_size_kb": 0, 00:17:24.646 "state": "online", 00:17:24.646 "raid_level": "raid1", 00:17:24.646 "superblock": true, 00:17:24.646 "num_base_bdevs": 2, 00:17:24.646 "num_base_bdevs_discovered": 2, 00:17:24.646 "num_base_bdevs_operational": 2, 00:17:24.646 "base_bdevs_list": [ 00:17:24.646 { 00:17:24.646 "name": "BaseBdev1", 00:17:24.646 "uuid": "ccd0fbc7-7661-47ba-9bb1-1525a612ee16", 00:17:24.646 "is_configured": true, 00:17:24.646 "data_offset": 256, 00:17:24.646 "data_size": 7936 00:17:24.646 }, 00:17:24.646 { 00:17:24.646 "name": "BaseBdev2", 00:17:24.646 "uuid": "d37d0702-7b04-4f3b-868a-f1ea94ae04c8", 00:17:24.646 "is_configured": true, 00:17:24.646 "data_offset": 256, 00:17:24.646 "data_size": 7936 00:17:24.646 } 00:17:24.646 ] 00:17:24.646 } 00:17:24.646 } 00:17:24.646 }' 00:17:24.646 13:27:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:24.905 13:27:13 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:24.905 BaseBdev2' 00:17:24.905 13:27:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:24.905 13:27:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:24.905 13:27:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:24.905 13:27:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:24.905 13:27:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.905 13:27:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.905 13:27:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:24.905 13:27:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.905 13:27:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:24.906 13:27:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:24.906 13:27:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:24.906 13:27:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:24.906 13:27:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:17:24.906 13:27:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.906 13:27:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.906 13:27:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.906 13:27:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:24.906 13:27:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:24.906 13:27:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:24.906 13:27:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.906 13:27:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.906 [2024-11-17 13:27:14.033432] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:24.906 13:27:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.906 13:27:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:24.906 13:27:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:25.165 13:27:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:25.165 13:27:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:17:25.165 13:27:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:25.165 13:27:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid 
online raid1 0 1 00:17:25.165 13:27:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:25.165 13:27:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:25.165 13:27:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:25.165 13:27:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:25.165 13:27:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:25.165 13:27:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:25.165 13:27:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:25.165 13:27:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:25.165 13:27:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:25.165 13:27:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.165 13:27:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.165 13:27:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.165 13:27:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:25.165 13:27:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.165 13:27:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:25.165 "name": "Existed_Raid", 00:17:25.165 "uuid": 
"86d7021b-eee5-4db6-89b3-5485e48fe752", 00:17:25.165 "strip_size_kb": 0, 00:17:25.165 "state": "online", 00:17:25.165 "raid_level": "raid1", 00:17:25.165 "superblock": true, 00:17:25.165 "num_base_bdevs": 2, 00:17:25.165 "num_base_bdevs_discovered": 1, 00:17:25.165 "num_base_bdevs_operational": 1, 00:17:25.165 "base_bdevs_list": [ 00:17:25.165 { 00:17:25.165 "name": null, 00:17:25.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.165 "is_configured": false, 00:17:25.165 "data_offset": 0, 00:17:25.165 "data_size": 7936 00:17:25.165 }, 00:17:25.165 { 00:17:25.165 "name": "BaseBdev2", 00:17:25.165 "uuid": "d37d0702-7b04-4f3b-868a-f1ea94ae04c8", 00:17:25.165 "is_configured": true, 00:17:25.165 "data_offset": 256, 00:17:25.165 "data_size": 7936 00:17:25.165 } 00:17:25.165 ] 00:17:25.165 }' 00:17:25.165 13:27:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.165 13:27:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.424 13:27:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:25.425 13:27:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:25.425 13:27:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.425 13:27:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.425 13:27:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.425 13:27:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:25.425 13:27:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.425 13:27:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 
-- # raid_bdev=Existed_Raid 00:17:25.425 13:27:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:25.425 13:27:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:25.425 13:27:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.425 13:27:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.425 [2024-11-17 13:27:14.635650] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:25.425 [2024-11-17 13:27:14.635746] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:25.685 [2024-11-17 13:27:14.731703] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:25.685 [2024-11-17 13:27:14.731752] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:25.685 [2024-11-17 13:27:14.731762] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:25.685 13:27:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.685 13:27:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:25.685 13:27:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:25.685 13:27:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.685 13:27:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:25.685 13:27:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.685 13:27:14 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.685 13:27:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.685 13:27:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:25.685 13:27:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:25.685 13:27:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:25.685 13:27:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87051 00:17:25.685 13:27:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87051 ']' 00:17:25.685 13:27:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87051 00:17:25.685 13:27:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:17:25.685 13:27:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:25.685 13:27:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87051 00:17:25.685 13:27:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:25.685 killing process with pid 87051 00:17:25.685 13:27:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:25.685 13:27:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87051' 00:17:25.685 13:27:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87051 00:17:25.685 [2024-11-17 13:27:14.830833] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:17:25.685 13:27:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87051 00:17:25.685 [2024-11-17 13:27:14.846219] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:27.066 13:27:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:17:27.066 00:17:27.066 real 0m4.957s 00:17:27.066 user 0m7.161s 00:17:27.066 sys 0m0.882s 00:17:27.066 13:27:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:27.066 13:27:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:27.066 ************************************ 00:17:27.066 END TEST raid_state_function_test_sb_md_separate 00:17:27.066 ************************************ 00:17:27.066 13:27:15 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:17:27.066 13:27:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:27.066 13:27:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:27.066 13:27:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:27.066 ************************************ 00:17:27.066 START TEST raid_superblock_test_md_separate 00:17:27.066 ************************************ 00:17:27.066 13:27:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:17:27.066 13:27:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:27.066 13:27:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:27.066 13:27:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:27.066 13:27:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 
00:17:27.066 13:27:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:27.066 13:27:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:27.066 13:27:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:27.066 13:27:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:27.066 13:27:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:27.066 13:27:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:27.066 13:27:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:27.066 13:27:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:27.066 13:27:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:27.066 13:27:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:27.067 13:27:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:27.067 13:27:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87298 00:17:27.067 13:27:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:27.067 13:27:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87298 00:17:27.067 13:27:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87298 ']' 00:17:27.067 13:27:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:27.067 13:27:15 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:17:27.067 13:27:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:27.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:27.067 13:27:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:27.067 13:27:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:27.067 [2024-11-17 13:27:16.044094] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:17:27.067 [2024-11-17 13:27:16.044282] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87298 ] 00:17:27.067 [2024-11-17 13:27:16.217961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.327 [2024-11-17 13:27:16.328576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:27.327 [2024-11-17 13:27:16.520770] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:27.327 [2024-11-17 13:27:16.520844] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:27.897 13:27:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:27.897 13:27:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:17:27.897 13:27:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:27.897 13:27:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:27.897 13:27:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 
-- # local bdev_malloc=malloc1 00:17:27.897 13:27:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:27.897 13:27:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:27.897 13:27:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:27.897 13:27:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:27.897 13:27:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:27.897 13:27:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:17:27.897 13:27:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.897 13:27:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:27.897 malloc1 00:17:27.897 13:27:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.897 13:27:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:27.897 13:27:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.897 13:27:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:27.897 [2024-11-17 13:27:16.914598] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:27.897 [2024-11-17 13:27:16.914766] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:27.897 [2024-11-17 13:27:16.914808] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:27.897 [2024-11-17 
13:27:16.914837] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:27.897 [2024-11-17 13:27:16.916691] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:27.897 [2024-11-17 13:27:16.916756] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:27.897 pt1 00:17:27.897 13:27:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.897 13:27:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:27.897 13:27:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:27.897 13:27:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:27.897 13:27:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:27.897 13:27:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:27.897 13:27:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:27.897 13:27:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:27.897 13:27:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:27.897 13:27:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:17:27.897 13:27:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.897 13:27:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:27.897 malloc2 00:17:27.897 13:27:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.897 13:27:16 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:27.897 13:27:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.897 13:27:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:27.897 [2024-11-17 13:27:16.973149] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:27.897 [2024-11-17 13:27:16.973291] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:27.897 [2024-11-17 13:27:16.973331] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:27.897 [2024-11-17 13:27:16.973359] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:27.897 [2024-11-17 13:27:16.975140] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:27.897 [2024-11-17 13:27:16.975215] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:27.897 pt2 00:17:27.897 13:27:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.897 13:27:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:27.897 13:27:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:27.897 13:27:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:27.897 13:27:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.897 13:27:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:27.897 [2024-11-17 13:27:16.985151] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:27.897 
[2024-11-17 13:27:16.986857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:27.897 [2024-11-17 13:27:16.987031] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:27.897 [2024-11-17 13:27:16.987045] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:27.897 [2024-11-17 13:27:16.987118] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:27.897 [2024-11-17 13:27:16.987247] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:27.898 [2024-11-17 13:27:16.987259] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:27.898 [2024-11-17 13:27:16.987366] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:27.898 13:27:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.898 13:27:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:27.898 13:27:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:27.898 13:27:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:27.898 13:27:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:27.898 13:27:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:27.898 13:27:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:27.898 13:27:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:27.898 13:27:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:17:27.898 13:27:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:27.898 13:27:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:27.898 13:27:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.898 13:27:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.898 13:27:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.898 13:27:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:27.898 13:27:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.898 13:27:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:27.898 "name": "raid_bdev1", 00:17:27.898 "uuid": "e6c99854-4997-4fb5-8576-cfc3ce4be65e", 00:17:27.898 "strip_size_kb": 0, 00:17:27.898 "state": "online", 00:17:27.898 "raid_level": "raid1", 00:17:27.898 "superblock": true, 00:17:27.898 "num_base_bdevs": 2, 00:17:27.898 "num_base_bdevs_discovered": 2, 00:17:27.898 "num_base_bdevs_operational": 2, 00:17:27.898 "base_bdevs_list": [ 00:17:27.898 { 00:17:27.898 "name": "pt1", 00:17:27.898 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:27.898 "is_configured": true, 00:17:27.898 "data_offset": 256, 00:17:27.898 "data_size": 7936 00:17:27.898 }, 00:17:27.898 { 00:17:27.898 "name": "pt2", 00:17:27.898 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:27.898 "is_configured": true, 00:17:27.898 "data_offset": 256, 00:17:27.898 "data_size": 7936 00:17:27.898 } 00:17:27.898 ] 00:17:27.898 }' 00:17:27.898 13:27:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:27.898 13:27:17 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:17:28.468 13:27:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:28.468 13:27:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:28.468 13:27:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:28.468 13:27:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:28.468 13:27:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:28.468 13:27:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:28.468 13:27:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:28.468 13:27:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:28.468 13:27:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.468 13:27:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.468 [2024-11-17 13:27:17.468644] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:28.468 13:27:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.468 13:27:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:28.468 "name": "raid_bdev1", 00:17:28.468 "aliases": [ 00:17:28.468 "e6c99854-4997-4fb5-8576-cfc3ce4be65e" 00:17:28.468 ], 00:17:28.468 "product_name": "Raid Volume", 00:17:28.468 "block_size": 4096, 00:17:28.468 "num_blocks": 7936, 00:17:28.468 "uuid": "e6c99854-4997-4fb5-8576-cfc3ce4be65e", 00:17:28.468 "md_size": 32, 00:17:28.468 "md_interleave": false, 00:17:28.468 "dif_type": 0, 00:17:28.468 
"assigned_rate_limits": { 00:17:28.468 "rw_ios_per_sec": 0, 00:17:28.468 "rw_mbytes_per_sec": 0, 00:17:28.468 "r_mbytes_per_sec": 0, 00:17:28.468 "w_mbytes_per_sec": 0 00:17:28.468 }, 00:17:28.468 "claimed": false, 00:17:28.468 "zoned": false, 00:17:28.468 "supported_io_types": { 00:17:28.468 "read": true, 00:17:28.468 "write": true, 00:17:28.468 "unmap": false, 00:17:28.468 "flush": false, 00:17:28.468 "reset": true, 00:17:28.468 "nvme_admin": false, 00:17:28.468 "nvme_io": false, 00:17:28.468 "nvme_io_md": false, 00:17:28.468 "write_zeroes": true, 00:17:28.468 "zcopy": false, 00:17:28.468 "get_zone_info": false, 00:17:28.468 "zone_management": false, 00:17:28.468 "zone_append": false, 00:17:28.468 "compare": false, 00:17:28.468 "compare_and_write": false, 00:17:28.468 "abort": false, 00:17:28.468 "seek_hole": false, 00:17:28.468 "seek_data": false, 00:17:28.468 "copy": false, 00:17:28.468 "nvme_iov_md": false 00:17:28.468 }, 00:17:28.468 "memory_domains": [ 00:17:28.468 { 00:17:28.468 "dma_device_id": "system", 00:17:28.468 "dma_device_type": 1 00:17:28.468 }, 00:17:28.468 { 00:17:28.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:28.468 "dma_device_type": 2 00:17:28.468 }, 00:17:28.468 { 00:17:28.468 "dma_device_id": "system", 00:17:28.468 "dma_device_type": 1 00:17:28.468 }, 00:17:28.468 { 00:17:28.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:28.468 "dma_device_type": 2 00:17:28.468 } 00:17:28.468 ], 00:17:28.468 "driver_specific": { 00:17:28.468 "raid": { 00:17:28.469 "uuid": "e6c99854-4997-4fb5-8576-cfc3ce4be65e", 00:17:28.469 "strip_size_kb": 0, 00:17:28.469 "state": "online", 00:17:28.469 "raid_level": "raid1", 00:17:28.469 "superblock": true, 00:17:28.469 "num_base_bdevs": 2, 00:17:28.469 "num_base_bdevs_discovered": 2, 00:17:28.469 "num_base_bdevs_operational": 2, 00:17:28.469 "base_bdevs_list": [ 00:17:28.469 { 00:17:28.469 "name": "pt1", 00:17:28.469 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:28.469 "is_configured": true, 
00:17:28.469 "data_offset": 256, 00:17:28.469 "data_size": 7936 00:17:28.469 }, 00:17:28.469 { 00:17:28.469 "name": "pt2", 00:17:28.469 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:28.469 "is_configured": true, 00:17:28.469 "data_offset": 256, 00:17:28.469 "data_size": 7936 00:17:28.469 } 00:17:28.469 ] 00:17:28.469 } 00:17:28.469 } 00:17:28.469 }' 00:17:28.469 13:27:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:28.469 13:27:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:28.469 pt2' 00:17:28.469 13:27:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:28.469 13:27:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:28.469 13:27:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:28.469 13:27:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:28.469 13:27:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:28.469 13:27:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.469 13:27:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.469 13:27:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.469 13:27:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:28.469 13:27:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ 
\f\a\l\s\e\ \0 ]] 00:17:28.469 13:27:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:28.469 13:27:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:28.469 13:27:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:28.469 13:27:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.469 13:27:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.469 13:27:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.469 13:27:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:28.469 13:27:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:28.469 13:27:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:28.469 13:27:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:28.469 13:27:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.469 13:27:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.469 [2024-11-17 13:27:17.656276] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:28.469 13:27:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.469 13:27:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e6c99854-4997-4fb5-8576-cfc3ce4be65e 00:17:28.469 13:27:17 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@436 -- # '[' -z e6c99854-4997-4fb5-8576-cfc3ce4be65e ']' 00:17:28.729 13:27:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:28.729 13:27:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.729 13:27:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.729 [2024-11-17 13:27:17.699934] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:28.729 [2024-11-17 13:27:17.699959] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:28.729 [2024-11-17 13:27:17.700045] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:28.729 [2024-11-17 13:27:17.700100] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:28.729 [2024-11-17 13:27:17.700111] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:28.729 13:27:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.729 13:27:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.729 13:27:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.729 13:27:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:28.729 13:27:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.729 13:27:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.729 13:27:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:28.729 13:27:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- 
# '[' -n '' ']' 00:17:28.729 13:27:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:28.730 13:27:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:28.730 13:27:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.730 13:27:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.730 13:27:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.730 13:27:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:28.730 13:27:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:28.730 13:27:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.730 13:27:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.730 13:27:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.730 13:27:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:28.730 13:27:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.730 13:27:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.730 13:27:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:28.730 13:27:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.730 13:27:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:28.730 13:27:17 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:28.730 13:27:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:17:28.730 13:27:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:28.730 13:27:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:28.730 13:27:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:28.730 13:27:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:28.730 13:27:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:28.730 13:27:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:28.730 13:27:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.730 13:27:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.730 [2024-11-17 13:27:17.843686] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:28.730 [2024-11-17 13:27:17.845474] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:28.730 [2024-11-17 13:27:17.845605] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:28.730 [2024-11-17 13:27:17.845660] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:28.730 [2024-11-17 13:27:17.845673] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:28.730 [2024-11-17 13:27:17.845682] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:28.730 request: 00:17:28.730 { 00:17:28.730 "name": "raid_bdev1", 00:17:28.730 "raid_level": "raid1", 00:17:28.730 "base_bdevs": [ 00:17:28.730 "malloc1", 00:17:28.730 "malloc2" 00:17:28.730 ], 00:17:28.730 "superblock": false, 00:17:28.730 "method": "bdev_raid_create", 00:17:28.730 "req_id": 1 00:17:28.730 } 00:17:28.730 Got JSON-RPC error response 00:17:28.730 response: 00:17:28.730 { 00:17:28.730 "code": -17, 00:17:28.730 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:28.730 } 00:17:28.730 13:27:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:28.730 13:27:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:17:28.730 13:27:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:28.730 13:27:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:28.730 13:27:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:28.730 13:27:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.730 13:27:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:28.730 13:27:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.730 13:27:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.730 13:27:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.730 13:27:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # 
raid_bdev= 00:17:28.730 13:27:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:28.730 13:27:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:28.730 13:27:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.730 13:27:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.730 [2024-11-17 13:27:17.911551] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:28.730 [2024-11-17 13:27:17.911649] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:28.730 [2024-11-17 13:27:17.911683] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:28.730 [2024-11-17 13:27:17.911716] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:28.730 [2024-11-17 13:27:17.913662] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:28.730 [2024-11-17 13:27:17.913742] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:28.730 [2024-11-17 13:27:17.913810] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:28.730 [2024-11-17 13:27:17.913882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:28.730 pt1 00:17:28.730 13:27:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.730 13:27:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:28.730 13:27:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:28.730 13:27:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 
-- # local expected_state=configuring 00:17:28.730 13:27:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:28.730 13:27:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:28.730 13:27:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:28.730 13:27:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:28.730 13:27:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:28.730 13:27:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:28.730 13:27:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:28.730 13:27:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.730 13:27:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.730 13:27:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.730 13:27:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.730 13:27:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.002 13:27:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:29.002 "name": "raid_bdev1", 00:17:29.002 "uuid": "e6c99854-4997-4fb5-8576-cfc3ce4be65e", 00:17:29.002 "strip_size_kb": 0, 00:17:29.002 "state": "configuring", 00:17:29.002 "raid_level": "raid1", 00:17:29.002 "superblock": true, 00:17:29.002 "num_base_bdevs": 2, 00:17:29.002 "num_base_bdevs_discovered": 1, 00:17:29.002 "num_base_bdevs_operational": 2, 00:17:29.002 "base_bdevs_list": [ 00:17:29.002 { 
00:17:29.002 "name": "pt1", 00:17:29.002 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:29.002 "is_configured": true, 00:17:29.002 "data_offset": 256, 00:17:29.002 "data_size": 7936 00:17:29.002 }, 00:17:29.002 { 00:17:29.002 "name": null, 00:17:29.002 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:29.002 "is_configured": false, 00:17:29.002 "data_offset": 256, 00:17:29.002 "data_size": 7936 00:17:29.002 } 00:17:29.002 ] 00:17:29.002 }' 00:17:29.002 13:27:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:29.002 13:27:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:29.277 13:27:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:29.277 13:27:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:29.277 13:27:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:29.277 13:27:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:29.277 13:27:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.277 13:27:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:29.277 [2024-11-17 13:27:18.358838] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:29.277 [2024-11-17 13:27:18.358977] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:29.277 [2024-11-17 13:27:18.359004] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:29.277 [2024-11-17 13:27:18.359017] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:29.277 [2024-11-17 13:27:18.359282] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:17:29.277 [2024-11-17 13:27:18.359303] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:29.277 [2024-11-17 13:27:18.359353] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:29.277 [2024-11-17 13:27:18.359376] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:29.277 [2024-11-17 13:27:18.359499] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:29.277 [2024-11-17 13:27:18.359510] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:29.277 [2024-11-17 13:27:18.359580] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:29.277 [2024-11-17 13:27:18.359715] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:29.277 [2024-11-17 13:27:18.359734] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:29.278 [2024-11-17 13:27:18.359838] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:29.278 pt2 00:17:29.278 13:27:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.278 13:27:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:29.278 13:27:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:29.278 13:27:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:29.278 13:27:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:29.278 13:27:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:29.278 13:27:18 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:29.278 13:27:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:29.278 13:27:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:29.278 13:27:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:29.278 13:27:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:29.278 13:27:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:29.278 13:27:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:29.278 13:27:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.278 13:27:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.278 13:27:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.278 13:27:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:29.278 13:27:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.278 13:27:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:29.278 "name": "raid_bdev1", 00:17:29.278 "uuid": "e6c99854-4997-4fb5-8576-cfc3ce4be65e", 00:17:29.278 "strip_size_kb": 0, 00:17:29.278 "state": "online", 00:17:29.278 "raid_level": "raid1", 00:17:29.278 "superblock": true, 00:17:29.278 "num_base_bdevs": 2, 00:17:29.278 "num_base_bdevs_discovered": 2, 00:17:29.278 "num_base_bdevs_operational": 2, 00:17:29.278 "base_bdevs_list": [ 00:17:29.278 { 00:17:29.278 "name": "pt1", 00:17:29.278 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:29.278 
"is_configured": true, 00:17:29.278 "data_offset": 256, 00:17:29.278 "data_size": 7936 00:17:29.278 }, 00:17:29.278 { 00:17:29.278 "name": "pt2", 00:17:29.278 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:29.278 "is_configured": true, 00:17:29.278 "data_offset": 256, 00:17:29.278 "data_size": 7936 00:17:29.278 } 00:17:29.278 ] 00:17:29.278 }' 00:17:29.278 13:27:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:29.278 13:27:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:29.538 13:27:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:29.538 13:27:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:29.538 13:27:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:29.538 13:27:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:29.538 13:27:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:29.538 13:27:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:29.801 13:27:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:29.801 13:27:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:29.801 13:27:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.801 13:27:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:29.801 [2024-11-17 13:27:18.770701] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:29.801 13:27:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:17:29.801 13:27:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:29.801 "name": "raid_bdev1", 00:17:29.801 "aliases": [ 00:17:29.801 "e6c99854-4997-4fb5-8576-cfc3ce4be65e" 00:17:29.801 ], 00:17:29.801 "product_name": "Raid Volume", 00:17:29.801 "block_size": 4096, 00:17:29.801 "num_blocks": 7936, 00:17:29.801 "uuid": "e6c99854-4997-4fb5-8576-cfc3ce4be65e", 00:17:29.801 "md_size": 32, 00:17:29.801 "md_interleave": false, 00:17:29.801 "dif_type": 0, 00:17:29.801 "assigned_rate_limits": { 00:17:29.801 "rw_ios_per_sec": 0, 00:17:29.801 "rw_mbytes_per_sec": 0, 00:17:29.801 "r_mbytes_per_sec": 0, 00:17:29.801 "w_mbytes_per_sec": 0 00:17:29.801 }, 00:17:29.801 "claimed": false, 00:17:29.801 "zoned": false, 00:17:29.801 "supported_io_types": { 00:17:29.801 "read": true, 00:17:29.801 "write": true, 00:17:29.801 "unmap": false, 00:17:29.801 "flush": false, 00:17:29.801 "reset": true, 00:17:29.801 "nvme_admin": false, 00:17:29.801 "nvme_io": false, 00:17:29.801 "nvme_io_md": false, 00:17:29.801 "write_zeroes": true, 00:17:29.801 "zcopy": false, 00:17:29.801 "get_zone_info": false, 00:17:29.801 "zone_management": false, 00:17:29.801 "zone_append": false, 00:17:29.801 "compare": false, 00:17:29.801 "compare_and_write": false, 00:17:29.801 "abort": false, 00:17:29.801 "seek_hole": false, 00:17:29.801 "seek_data": false, 00:17:29.801 "copy": false, 00:17:29.801 "nvme_iov_md": false 00:17:29.801 }, 00:17:29.801 "memory_domains": [ 00:17:29.801 { 00:17:29.801 "dma_device_id": "system", 00:17:29.801 "dma_device_type": 1 00:17:29.801 }, 00:17:29.801 { 00:17:29.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:29.801 "dma_device_type": 2 00:17:29.801 }, 00:17:29.801 { 00:17:29.801 "dma_device_id": "system", 00:17:29.801 "dma_device_type": 1 00:17:29.801 }, 00:17:29.801 { 00:17:29.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:29.801 "dma_device_type": 2 00:17:29.801 } 00:17:29.801 ], 00:17:29.801 "driver_specific": { 
00:17:29.801 "raid": { 00:17:29.801 "uuid": "e6c99854-4997-4fb5-8576-cfc3ce4be65e", 00:17:29.801 "strip_size_kb": 0, 00:17:29.801 "state": "online", 00:17:29.801 "raid_level": "raid1", 00:17:29.801 "superblock": true, 00:17:29.801 "num_base_bdevs": 2, 00:17:29.801 "num_base_bdevs_discovered": 2, 00:17:29.801 "num_base_bdevs_operational": 2, 00:17:29.801 "base_bdevs_list": [ 00:17:29.801 { 00:17:29.801 "name": "pt1", 00:17:29.801 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:29.801 "is_configured": true, 00:17:29.801 "data_offset": 256, 00:17:29.801 "data_size": 7936 00:17:29.801 }, 00:17:29.801 { 00:17:29.801 "name": "pt2", 00:17:29.801 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:29.801 "is_configured": true, 00:17:29.801 "data_offset": 256, 00:17:29.801 "data_size": 7936 00:17:29.801 } 00:17:29.801 ] 00:17:29.801 } 00:17:29.801 } 00:17:29.802 }' 00:17:29.802 13:27:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:29.802 13:27:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:29.802 pt2' 00:17:29.802 13:27:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:29.802 13:27:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:29.802 13:27:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:29.802 13:27:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:29.802 13:27:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:29.802 13:27:18 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.802 13:27:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:29.802 13:27:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.802 13:27:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:29.802 13:27:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:29.802 13:27:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:29.802 13:27:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:29.802 13:27:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.802 13:27:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:29.802 13:27:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:29.802 13:27:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.802 13:27:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:29.802 13:27:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:29.802 13:27:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:29.802 13:27:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:29.802 13:27:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.802 13:27:18 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:29.802 [2024-11-17 13:27:18.998262] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:29.802 13:27:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.062 13:27:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' e6c99854-4997-4fb5-8576-cfc3ce4be65e '!=' e6c99854-4997-4fb5-8576-cfc3ce4be65e ']' 00:17:30.062 13:27:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:30.062 13:27:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:30.062 13:27:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:17:30.062 13:27:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:30.062 13:27:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.062 13:27:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:30.062 [2024-11-17 13:27:19.045983] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:30.062 13:27:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.062 13:27:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:30.062 13:27:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:30.062 13:27:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:30.062 13:27:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:30.062 13:27:19 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:30.062 13:27:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:30.062 13:27:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:30.062 13:27:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:30.062 13:27:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:30.062 13:27:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.062 13:27:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.062 13:27:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.062 13:27:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.062 13:27:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:30.062 13:27:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.062 13:27:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:30.062 "name": "raid_bdev1", 00:17:30.062 "uuid": "e6c99854-4997-4fb5-8576-cfc3ce4be65e", 00:17:30.062 "strip_size_kb": 0, 00:17:30.062 "state": "online", 00:17:30.062 "raid_level": "raid1", 00:17:30.062 "superblock": true, 00:17:30.062 "num_base_bdevs": 2, 00:17:30.062 "num_base_bdevs_discovered": 1, 00:17:30.062 "num_base_bdevs_operational": 1, 00:17:30.062 "base_bdevs_list": [ 00:17:30.062 { 00:17:30.062 "name": null, 00:17:30.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.062 "is_configured": false, 00:17:30.062 "data_offset": 0, 00:17:30.062 "data_size": 7936 00:17:30.062 }, 00:17:30.062 { 00:17:30.062 
"name": "pt2", 00:17:30.062 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:30.062 "is_configured": true, 00:17:30.062 "data_offset": 256, 00:17:30.062 "data_size": 7936 00:17:30.062 } 00:17:30.062 ] 00:17:30.062 }' 00:17:30.062 13:27:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:30.062 13:27:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:30.321 13:27:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:30.321 13:27:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.321 13:27:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:30.321 [2024-11-17 13:27:19.505191] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:30.321 [2024-11-17 13:27:19.505296] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:30.321 [2024-11-17 13:27:19.505394] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:30.321 [2024-11-17 13:27:19.505515] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:30.321 [2024-11-17 13:27:19.505571] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:30.321 13:27:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.321 13:27:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.321 13:27:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:30.321 13:27:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.321 13:27:19 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:30.321 13:27:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.582 13:27:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:30.582 13:27:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:30.582 13:27:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:30.582 13:27:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:30.582 13:27:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:30.582 13:27:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.582 13:27:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:30.582 13:27:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.582 13:27:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:30.582 13:27:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:30.582 13:27:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:30.582 13:27:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:30.582 13:27:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:17:30.582 13:27:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:30.582 13:27:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.582 13:27:19 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:30.582 [2024-11-17 13:27:19.581089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:30.582 [2024-11-17 13:27:19.581160] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:30.582 [2024-11-17 13:27:19.581178] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:30.582 [2024-11-17 13:27:19.581189] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:30.582 [2024-11-17 13:27:19.583431] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:30.582 [2024-11-17 13:27:19.583471] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:30.582 [2024-11-17 13:27:19.583521] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:30.582 [2024-11-17 13:27:19.583576] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:30.582 [2024-11-17 13:27:19.583677] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:30.582 [2024-11-17 13:27:19.583689] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:30.582 [2024-11-17 13:27:19.583763] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:30.582 [2024-11-17 13:27:19.583863] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:30.582 [2024-11-17 13:27:19.583870] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:30.582 [2024-11-17 13:27:19.583965] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:30.582 pt2 00:17:30.582 13:27:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.582 13:27:19 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:30.582 13:27:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:30.582 13:27:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:30.582 13:27:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:30.582 13:27:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:30.582 13:27:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:30.582 13:27:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:30.582 13:27:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:30.582 13:27:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:30.582 13:27:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.582 13:27:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.582 13:27:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.582 13:27:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.582 13:27:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:30.582 13:27:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.582 13:27:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:30.582 "name": "raid_bdev1", 00:17:30.582 "uuid": 
"e6c99854-4997-4fb5-8576-cfc3ce4be65e", 00:17:30.582 "strip_size_kb": 0, 00:17:30.582 "state": "online", 00:17:30.582 "raid_level": "raid1", 00:17:30.582 "superblock": true, 00:17:30.582 "num_base_bdevs": 2, 00:17:30.582 "num_base_bdevs_discovered": 1, 00:17:30.582 "num_base_bdevs_operational": 1, 00:17:30.582 "base_bdevs_list": [ 00:17:30.582 { 00:17:30.582 "name": null, 00:17:30.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.582 "is_configured": false, 00:17:30.582 "data_offset": 256, 00:17:30.582 "data_size": 7936 00:17:30.582 }, 00:17:30.582 { 00:17:30.582 "name": "pt2", 00:17:30.582 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:30.582 "is_configured": true, 00:17:30.582 "data_offset": 256, 00:17:30.582 "data_size": 7936 00:17:30.582 } 00:17:30.582 ] 00:17:30.582 }' 00:17:30.582 13:27:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:30.582 13:27:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:30.843 13:27:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:30.843 13:27:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.843 13:27:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:30.843 [2024-11-17 13:27:20.036291] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:30.843 [2024-11-17 13:27:20.036389] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:30.843 [2024-11-17 13:27:20.036479] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:30.843 [2024-11-17 13:27:20.036593] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:30.843 [2024-11-17 13:27:20.036647] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:30.843 13:27:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.843 13:27:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.843 13:27:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.843 13:27:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:30.843 13:27:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:30.843 13:27:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.103 13:27:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:31.103 13:27:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:31.103 13:27:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:17:31.103 13:27:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:31.103 13:27:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.103 13:27:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:31.103 [2024-11-17 13:27:20.100199] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:31.103 [2024-11-17 13:27:20.100305] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:31.103 [2024-11-17 13:27:20.100343] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:17:31.103 [2024-11-17 13:27:20.100370] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:31.103 [2024-11-17 
13:27:20.102336] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:31.103 [2024-11-17 13:27:20.102401] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:31.103 [2024-11-17 13:27:20.102475] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:31.103 [2024-11-17 13:27:20.102536] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:31.103 [2024-11-17 13:27:20.102739] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:31.103 [2024-11-17 13:27:20.102790] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:31.104 [2024-11-17 13:27:20.102829] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:31.104 [2024-11-17 13:27:20.102962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:31.104 [2024-11-17 13:27:20.103071] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:31.104 [2024-11-17 13:27:20.103106] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:31.104 [2024-11-17 13:27:20.103217] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:31.104 [2024-11-17 13:27:20.103361] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:31.104 [2024-11-17 13:27:20.103398] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:31.104 [2024-11-17 13:27:20.103569] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:31.104 pt1 00:17:31.104 13:27:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.104 13:27:20 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:17:31.104 13:27:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:31.104 13:27:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:31.104 13:27:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:31.104 13:27:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:31.104 13:27:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:31.104 13:27:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:31.104 13:27:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:31.104 13:27:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:31.104 13:27:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:31.104 13:27:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:31.104 13:27:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.104 13:27:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.104 13:27:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.104 13:27:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:31.104 13:27:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.104 13:27:20 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:31.104 "name": "raid_bdev1", 00:17:31.104 "uuid": "e6c99854-4997-4fb5-8576-cfc3ce4be65e", 00:17:31.104 "strip_size_kb": 0, 00:17:31.104 "state": "online", 00:17:31.104 "raid_level": "raid1", 00:17:31.104 "superblock": true, 00:17:31.104 "num_base_bdevs": 2, 00:17:31.104 "num_base_bdevs_discovered": 1, 00:17:31.104 "num_base_bdevs_operational": 1, 00:17:31.104 "base_bdevs_list": [ 00:17:31.104 { 00:17:31.104 "name": null, 00:17:31.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.104 "is_configured": false, 00:17:31.104 "data_offset": 256, 00:17:31.104 "data_size": 7936 00:17:31.104 }, 00:17:31.104 { 00:17:31.104 "name": "pt2", 00:17:31.104 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:31.104 "is_configured": true, 00:17:31.104 "data_offset": 256, 00:17:31.104 "data_size": 7936 00:17:31.104 } 00:17:31.104 ] 00:17:31.104 }' 00:17:31.104 13:27:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:31.104 13:27:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:31.364 13:27:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:31.364 13:27:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:31.364 13:27:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.364 13:27:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:31.624 13:27:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.624 13:27:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:31.624 13:27:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 
00:17:31.624 13:27:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.624 13:27:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:31.624 13:27:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:31.624 [2024-11-17 13:27:20.627549] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:31.624 13:27:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.624 13:27:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' e6c99854-4997-4fb5-8576-cfc3ce4be65e '!=' e6c99854-4997-4fb5-8576-cfc3ce4be65e ']' 00:17:31.624 13:27:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87298 00:17:31.624 13:27:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87298 ']' 00:17:31.624 13:27:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 87298 00:17:31.624 13:27:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:17:31.624 13:27:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:31.624 13:27:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87298 00:17:31.624 13:27:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:31.624 13:27:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:31.624 killing process with pid 87298 00:17:31.624 13:27:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87298' 00:17:31.624 13:27:20 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@973 -- # kill 87298 00:17:31.624 [2024-11-17 13:27:20.712145] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:31.624 [2024-11-17 13:27:20.712272] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:31.624 [2024-11-17 13:27:20.712323] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:31.624 13:27:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 87298 00:17:31.624 [2024-11-17 13:27:20.712340] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:31.885 [2024-11-17 13:27:20.921708] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:32.826 13:27:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:17:32.826 00:17:32.826 real 0m5.995s 00:17:32.826 user 0m9.065s 00:17:32.826 sys 0m1.132s 00:17:32.826 13:27:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:32.826 ************************************ 00:17:32.826 END TEST raid_superblock_test_md_separate 00:17:32.826 ************************************ 00:17:32.826 13:27:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:32.826 13:27:22 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:17:32.826 13:27:22 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:17:32.826 13:27:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:32.826 13:27:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:32.826 13:27:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:32.826 ************************************ 00:17:32.826 START TEST raid_rebuild_test_sb_md_separate 00:17:32.826 
************************************ 00:17:32.826 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:17:32.826 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:32.826 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:32.826 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:32.826 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:32.826 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:32.826 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:32.826 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:32.826 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:32.826 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:32.826 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:32.826 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:32.826 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:32.826 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:32.826 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:32.826 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:32.826 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:32.826 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:32.826 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:32.826 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:32.826 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:32.826 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:32.826 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:32.826 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:32.826 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:32.826 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=87626 00:17:32.826 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:32.826 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 87626 00:17:32.826 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87626 ']' 00:17:32.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:32.826 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:32.826 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:32.826 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:32.826 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:32.826 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:33.087 [2024-11-17 13:27:22.126583] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:17:33.087 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:33.087 Zero copy mechanism will not be used. 00:17:33.087 [2024-11-17 13:27:22.126794] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87626 ] 00:17:33.087 [2024-11-17 13:27:22.305124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.347 [2024-11-17 13:27:22.407198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:33.606 [2024-11-17 13:27:22.593250] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:33.606 [2024-11-17 13:27:22.593307] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:33.866 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:33.866 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:17:33.866 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:33.866 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:17:33.866 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.866 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:33.866 BaseBdev1_malloc 00:17:33.866 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.866 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:33.866 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.866 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:33.867 [2024-11-17 13:27:22.970165] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:33.867 [2024-11-17 13:27:22.970241] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:33.867 [2024-11-17 13:27:22.970263] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:33.867 [2024-11-17 13:27:22.970274] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:33.867 [2024-11-17 13:27:22.972083] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:33.867 [2024-11-17 13:27:22.972126] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:33.867 BaseBdev1 00:17:33.867 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.867 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:33.867 13:27:22 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:17:33.867 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.867 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:33.867 BaseBdev2_malloc 00:17:33.867 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.867 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:33.867 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.867 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:33.867 [2024-11-17 13:27:23.024167] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:33.867 [2024-11-17 13:27:23.024241] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:33.867 [2024-11-17 13:27:23.024261] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:33.867 [2024-11-17 13:27:23.024271] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:33.867 [2024-11-17 13:27:23.025956] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:33.867 [2024-11-17 13:27:23.026057] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:33.867 BaseBdev2 00:17:33.867 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.867 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:17:33.867 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.867 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:34.127 spare_malloc 00:17:34.127 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.127 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:34.127 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.127 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:34.127 spare_delay 00:17:34.127 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.127 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:34.127 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.127 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:34.127 [2024-11-17 13:27:23.122002] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:34.127 [2024-11-17 13:27:23.122059] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:34.127 [2024-11-17 13:27:23.122077] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:34.127 [2024-11-17 13:27:23.122087] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:34.127 [2024-11-17 13:27:23.123866] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:34.127 [2024-11-17 13:27:23.123967] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:34.127 spare 00:17:34.127 13:27:23 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.127 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:34.127 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.127 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:34.127 [2024-11-17 13:27:23.134021] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:34.127 [2024-11-17 13:27:23.135680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:34.127 [2024-11-17 13:27:23.135848] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:34.127 [2024-11-17 13:27:23.135862] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:34.127 [2024-11-17 13:27:23.135922] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:34.127 [2024-11-17 13:27:23.136041] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:34.127 [2024-11-17 13:27:23.136048] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:34.127 [2024-11-17 13:27:23.136147] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:34.127 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.127 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:34.127 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:34.127 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:34.127 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:34.127 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:34.127 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:34.127 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:34.127 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:34.127 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:34.127 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:34.127 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.127 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.127 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.127 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:34.127 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.127 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:34.127 "name": "raid_bdev1", 00:17:34.127 "uuid": "14510339-2bdb-4d5a-bca7-83132f946334", 00:17:34.127 "strip_size_kb": 0, 00:17:34.127 "state": "online", 00:17:34.127 "raid_level": "raid1", 00:17:34.127 "superblock": true, 00:17:34.127 "num_base_bdevs": 2, 00:17:34.127 "num_base_bdevs_discovered": 2, 00:17:34.127 "num_base_bdevs_operational": 2, 00:17:34.127 "base_bdevs_list": [ 
00:17:34.127 { 00:17:34.127 "name": "BaseBdev1", 00:17:34.127 "uuid": "8b1afb18-7b9e-5d1b-9a25-dffe55304614", 00:17:34.127 "is_configured": true, 00:17:34.127 "data_offset": 256, 00:17:34.127 "data_size": 7936 00:17:34.127 }, 00:17:34.127 { 00:17:34.127 "name": "BaseBdev2", 00:17:34.127 "uuid": "ba022b73-a502-5263-84cb-5370ec0b2980", 00:17:34.127 "is_configured": true, 00:17:34.127 "data_offset": 256, 00:17:34.127 "data_size": 7936 00:17:34.127 } 00:17:34.127 ] 00:17:34.127 }' 00:17:34.127 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:34.127 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:34.387 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:34.388 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:34.388 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.388 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:34.388 [2024-11-17 13:27:23.593459] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:34.648 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.648 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:17:34.648 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:34.648 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.648 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.648 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:17:34.648 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.648 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:17:34.648 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:34.648 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:34.648 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:34.648 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:34.648 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:34.648 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:34.648 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:34.648 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:34.648 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:34.648 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:17:34.648 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:34.648 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:34.648 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:34.648 [2024-11-17 13:27:23.852777] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 
00:17:34.648 /dev/nbd0 00:17:34.909 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:34.909 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:34.909 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:34.909 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:17:34.909 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:34.909 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:34.909 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:34.909 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:17:34.909 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:34.909 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:34.909 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:34.909 1+0 records in 00:17:34.909 1+0 records out 00:17:34.909 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000266118 s, 15.4 MB/s 00:17:34.909 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:34.909 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:17:34.909 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:34.909 13:27:23 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:34.909 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:17:34.909 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:34.909 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:34.909 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:34.909 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:34.909 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:17:35.479 7936+0 records in 00:17:35.479 7936+0 records out 00:17:35.479 32505856 bytes (33 MB, 31 MiB) copied, 0.633448 s, 51.3 MB/s 00:17:35.479 13:27:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:35.479 13:27:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:35.479 13:27:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:35.479 13:27:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:35.479 13:27:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:17:35.479 13:27:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:35.479 13:27:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:35.743 13:27:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:35.743 [2024-11-17 
13:27:24.769296] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:35.743 13:27:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:35.743 13:27:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:35.743 13:27:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:35.743 13:27:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:35.743 13:27:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:35.743 13:27:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:17:35.743 13:27:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:17:35.743 13:27:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:35.743 13:27:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.743 13:27:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:35.744 [2024-11-17 13:27:24.786876] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:35.744 13:27:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.744 13:27:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:35.744 13:27:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:35.744 13:27:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:35.744 13:27:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:35.744 13:27:24 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:35.744 13:27:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:35.744 13:27:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:35.744 13:27:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:35.744 13:27:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:35.744 13:27:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:35.744 13:27:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.744 13:27:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.744 13:27:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.744 13:27:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:35.744 13:27:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.744 13:27:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:35.744 "name": "raid_bdev1", 00:17:35.744 "uuid": "14510339-2bdb-4d5a-bca7-83132f946334", 00:17:35.744 "strip_size_kb": 0, 00:17:35.744 "state": "online", 00:17:35.744 "raid_level": "raid1", 00:17:35.744 "superblock": true, 00:17:35.744 "num_base_bdevs": 2, 00:17:35.744 "num_base_bdevs_discovered": 1, 00:17:35.744 "num_base_bdevs_operational": 1, 00:17:35.744 "base_bdevs_list": [ 00:17:35.744 { 00:17:35.744 "name": null, 00:17:35.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.744 "is_configured": false, 00:17:35.744 "data_offset": 0, 00:17:35.744 "data_size": 7936 
00:17:35.744 }, 00:17:35.744 { 00:17:35.744 "name": "BaseBdev2", 00:17:35.744 "uuid": "ba022b73-a502-5263-84cb-5370ec0b2980", 00:17:35.744 "is_configured": true, 00:17:35.744 "data_offset": 256, 00:17:35.744 "data_size": 7936 00:17:35.744 } 00:17:35.744 ] 00:17:35.744 }' 00:17:35.744 13:27:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:35.744 13:27:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:36.337 13:27:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:36.337 13:27:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.337 13:27:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:36.337 [2024-11-17 13:27:25.258250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:36.337 [2024-11-17 13:27:25.271549] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:17:36.337 13:27:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.337 13:27:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:36.337 [2024-11-17 13:27:25.273411] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:37.278 13:27:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:37.278 13:27:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:37.278 13:27:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:37.278 13:27:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:37.278 13:27:26 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:37.278 13:27:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.279 13:27:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.279 13:27:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.279 13:27:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:37.279 13:27:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.279 13:27:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:37.279 "name": "raid_bdev1", 00:17:37.279 "uuid": "14510339-2bdb-4d5a-bca7-83132f946334", 00:17:37.279 "strip_size_kb": 0, 00:17:37.279 "state": "online", 00:17:37.279 "raid_level": "raid1", 00:17:37.279 "superblock": true, 00:17:37.279 "num_base_bdevs": 2, 00:17:37.279 "num_base_bdevs_discovered": 2, 00:17:37.279 "num_base_bdevs_operational": 2, 00:17:37.279 "process": { 00:17:37.279 "type": "rebuild", 00:17:37.279 "target": "spare", 00:17:37.279 "progress": { 00:17:37.279 "blocks": 2560, 00:17:37.279 "percent": 32 00:17:37.279 } 00:17:37.279 }, 00:17:37.279 "base_bdevs_list": [ 00:17:37.279 { 00:17:37.279 "name": "spare", 00:17:37.279 "uuid": "ea2b874c-4338-5cca-a2da-97ee36b36cc8", 00:17:37.279 "is_configured": true, 00:17:37.279 "data_offset": 256, 00:17:37.279 "data_size": 7936 00:17:37.279 }, 00:17:37.279 { 00:17:37.279 "name": "BaseBdev2", 00:17:37.279 "uuid": "ba022b73-a502-5263-84cb-5370ec0b2980", 00:17:37.279 "is_configured": true, 00:17:37.279 "data_offset": 256, 00:17:37.279 "data_size": 7936 00:17:37.279 } 00:17:37.279 ] 00:17:37.279 }' 00:17:37.279 13:27:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:17:37.279 13:27:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:37.279 13:27:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:37.279 13:27:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:37.279 13:27:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:37.279 13:27:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.279 13:27:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:37.279 [2024-11-17 13:27:26.433582] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:37.279 [2024-11-17 13:27:26.478539] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:37.279 [2024-11-17 13:27:26.478596] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:37.279 [2024-11-17 13:27:26.478610] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:37.279 [2024-11-17 13:27:26.478619] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:37.279 13:27:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.279 13:27:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:37.279 13:27:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:37.279 13:27:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:37.279 13:27:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- 
# local raid_level=raid1 00:17:37.279 13:27:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:37.279 13:27:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:37.279 13:27:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:37.279 13:27:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:37.279 13:27:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:37.279 13:27:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:37.540 13:27:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.540 13:27:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.540 13:27:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.540 13:27:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:37.540 13:27:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.540 13:27:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:37.540 "name": "raid_bdev1", 00:17:37.540 "uuid": "14510339-2bdb-4d5a-bca7-83132f946334", 00:17:37.540 "strip_size_kb": 0, 00:17:37.540 "state": "online", 00:17:37.540 "raid_level": "raid1", 00:17:37.540 "superblock": true, 00:17:37.540 "num_base_bdevs": 2, 00:17:37.540 "num_base_bdevs_discovered": 1, 00:17:37.540 "num_base_bdevs_operational": 1, 00:17:37.540 "base_bdevs_list": [ 00:17:37.540 { 00:17:37.540 "name": null, 00:17:37.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.540 "is_configured": false, 00:17:37.540 
"data_offset": 0, 00:17:37.540 "data_size": 7936 00:17:37.540 }, 00:17:37.540 { 00:17:37.540 "name": "BaseBdev2", 00:17:37.540 "uuid": "ba022b73-a502-5263-84cb-5370ec0b2980", 00:17:37.540 "is_configured": true, 00:17:37.540 "data_offset": 256, 00:17:37.540 "data_size": 7936 00:17:37.540 } 00:17:37.540 ] 00:17:37.540 }' 00:17:37.540 13:27:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:37.540 13:27:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:37.800 13:27:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:37.800 13:27:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:37.800 13:27:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:37.800 13:27:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:37.800 13:27:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:37.800 13:27:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.800 13:27:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.800 13:27:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.800 13:27:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:37.800 13:27:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.800 13:27:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:37.800 "name": "raid_bdev1", 00:17:37.800 "uuid": "14510339-2bdb-4d5a-bca7-83132f946334", 00:17:37.800 
"strip_size_kb": 0, 00:17:37.800 "state": "online", 00:17:37.800 "raid_level": "raid1", 00:17:37.800 "superblock": true, 00:17:37.800 "num_base_bdevs": 2, 00:17:37.800 "num_base_bdevs_discovered": 1, 00:17:37.800 "num_base_bdevs_operational": 1, 00:17:37.800 "base_bdevs_list": [ 00:17:37.800 { 00:17:37.800 "name": null, 00:17:37.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.800 "is_configured": false, 00:17:37.800 "data_offset": 0, 00:17:37.800 "data_size": 7936 00:17:37.800 }, 00:17:37.800 { 00:17:37.800 "name": "BaseBdev2", 00:17:37.800 "uuid": "ba022b73-a502-5263-84cb-5370ec0b2980", 00:17:37.800 "is_configured": true, 00:17:37.800 "data_offset": 256, 00:17:37.800 "data_size": 7936 00:17:37.800 } 00:17:37.800 ] 00:17:37.800 }' 00:17:37.800 13:27:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:38.060 13:27:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:38.060 13:27:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:38.060 13:27:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:38.060 13:27:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:38.060 13:27:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.060 13:27:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:38.060 [2024-11-17 13:27:27.108726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:38.060 [2024-11-17 13:27:27.122273] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:17:38.060 13:27:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.060 
13:27:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:38.060 [2024-11-17 13:27:27.124032] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:39.000 13:27:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:39.000 13:27:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:39.000 13:27:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:39.000 13:27:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:39.000 13:27:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:39.000 13:27:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.000 13:27:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.000 13:27:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.000 13:27:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:39.001 13:27:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.001 13:27:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:39.001 "name": "raid_bdev1", 00:17:39.001 "uuid": "14510339-2bdb-4d5a-bca7-83132f946334", 00:17:39.001 "strip_size_kb": 0, 00:17:39.001 "state": "online", 00:17:39.001 "raid_level": "raid1", 00:17:39.001 "superblock": true, 00:17:39.001 "num_base_bdevs": 2, 00:17:39.001 "num_base_bdevs_discovered": 2, 00:17:39.001 "num_base_bdevs_operational": 2, 00:17:39.001 "process": { 00:17:39.001 "type": "rebuild", 
00:17:39.001 "target": "spare", 00:17:39.001 "progress": { 00:17:39.001 "blocks": 2560, 00:17:39.001 "percent": 32 00:17:39.001 } 00:17:39.001 }, 00:17:39.001 "base_bdevs_list": [ 00:17:39.001 { 00:17:39.001 "name": "spare", 00:17:39.001 "uuid": "ea2b874c-4338-5cca-a2da-97ee36b36cc8", 00:17:39.001 "is_configured": true, 00:17:39.001 "data_offset": 256, 00:17:39.001 "data_size": 7936 00:17:39.001 }, 00:17:39.001 { 00:17:39.001 "name": "BaseBdev2", 00:17:39.001 "uuid": "ba022b73-a502-5263-84cb-5370ec0b2980", 00:17:39.001 "is_configured": true, 00:17:39.001 "data_offset": 256, 00:17:39.001 "data_size": 7936 00:17:39.001 } 00:17:39.001 ] 00:17:39.001 }' 00:17:39.001 13:27:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:39.261 13:27:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:39.261 13:27:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:39.261 13:27:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:39.261 13:27:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:39.261 13:27:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:39.261 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:39.261 13:27:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:39.261 13:27:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:39.261 13:27:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:39.261 13:27:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=698 00:17:39.261 13:27:28 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:39.261 13:27:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:39.261 13:27:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:39.261 13:27:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:39.261 13:27:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:39.261 13:27:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:39.261 13:27:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.261 13:27:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.261 13:27:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:39.261 13:27:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.261 13:27:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.261 13:27:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:39.261 "name": "raid_bdev1", 00:17:39.261 "uuid": "14510339-2bdb-4d5a-bca7-83132f946334", 00:17:39.261 "strip_size_kb": 0, 00:17:39.261 "state": "online", 00:17:39.261 "raid_level": "raid1", 00:17:39.261 "superblock": true, 00:17:39.261 "num_base_bdevs": 2, 00:17:39.261 "num_base_bdevs_discovered": 2, 00:17:39.261 "num_base_bdevs_operational": 2, 00:17:39.261 "process": { 00:17:39.261 "type": "rebuild", 00:17:39.261 "target": "spare", 00:17:39.261 "progress": { 00:17:39.261 "blocks": 2816, 00:17:39.261 "percent": 35 00:17:39.261 } 00:17:39.261 
}, 00:17:39.261 "base_bdevs_list": [ 00:17:39.261 { 00:17:39.261 "name": "spare", 00:17:39.261 "uuid": "ea2b874c-4338-5cca-a2da-97ee36b36cc8", 00:17:39.261 "is_configured": true, 00:17:39.261 "data_offset": 256, 00:17:39.261 "data_size": 7936 00:17:39.261 }, 00:17:39.261 { 00:17:39.261 "name": "BaseBdev2", 00:17:39.261 "uuid": "ba022b73-a502-5263-84cb-5370ec0b2980", 00:17:39.261 "is_configured": true, 00:17:39.261 "data_offset": 256, 00:17:39.261 "data_size": 7936 00:17:39.261 } 00:17:39.261 ] 00:17:39.261 }' 00:17:39.261 13:27:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:39.261 13:27:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:39.261 13:27:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:39.261 13:27:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:39.261 13:27:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:40.200 13:27:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:40.200 13:27:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:40.200 13:27:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:40.200 13:27:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:40.200 13:27:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:40.200 13:27:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:40.200 13:27:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:17:40.200 13:27:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.200 13:27:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.200 13:27:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:40.459 13:27:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.459 13:27:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:40.459 "name": "raid_bdev1", 00:17:40.459 "uuid": "14510339-2bdb-4d5a-bca7-83132f946334", 00:17:40.459 "strip_size_kb": 0, 00:17:40.459 "state": "online", 00:17:40.459 "raid_level": "raid1", 00:17:40.459 "superblock": true, 00:17:40.459 "num_base_bdevs": 2, 00:17:40.459 "num_base_bdevs_discovered": 2, 00:17:40.459 "num_base_bdevs_operational": 2, 00:17:40.459 "process": { 00:17:40.459 "type": "rebuild", 00:17:40.459 "target": "spare", 00:17:40.459 "progress": { 00:17:40.459 "blocks": 5632, 00:17:40.459 "percent": 70 00:17:40.459 } 00:17:40.459 }, 00:17:40.459 "base_bdevs_list": [ 00:17:40.459 { 00:17:40.459 "name": "spare", 00:17:40.459 "uuid": "ea2b874c-4338-5cca-a2da-97ee36b36cc8", 00:17:40.459 "is_configured": true, 00:17:40.459 "data_offset": 256, 00:17:40.459 "data_size": 7936 00:17:40.459 }, 00:17:40.459 { 00:17:40.459 "name": "BaseBdev2", 00:17:40.459 "uuid": "ba022b73-a502-5263-84cb-5370ec0b2980", 00:17:40.459 "is_configured": true, 00:17:40.459 "data_offset": 256, 00:17:40.459 "data_size": 7936 00:17:40.459 } 00:17:40.459 ] 00:17:40.459 }' 00:17:40.459 13:27:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:40.459 13:27:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:40.459 13:27:29 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:40.459 13:27:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:40.459 13:27:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:41.029 [2024-11-17 13:27:30.235457] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:41.029 [2024-11-17 13:27:30.235520] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:41.029 [2024-11-17 13:27:30.235604] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:41.598 13:27:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:41.598 13:27:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:41.598 13:27:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:41.598 13:27:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:41.598 13:27:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:41.598 13:27:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:41.598 13:27:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.598 13:27:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.598 13:27:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.598 13:27:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:41.598 13:27:30 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.598 13:27:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:41.598 "name": "raid_bdev1", 00:17:41.598 "uuid": "14510339-2bdb-4d5a-bca7-83132f946334", 00:17:41.598 "strip_size_kb": 0, 00:17:41.598 "state": "online", 00:17:41.598 "raid_level": "raid1", 00:17:41.598 "superblock": true, 00:17:41.598 "num_base_bdevs": 2, 00:17:41.598 "num_base_bdevs_discovered": 2, 00:17:41.598 "num_base_bdevs_operational": 2, 00:17:41.598 "base_bdevs_list": [ 00:17:41.598 { 00:17:41.598 "name": "spare", 00:17:41.598 "uuid": "ea2b874c-4338-5cca-a2da-97ee36b36cc8", 00:17:41.598 "is_configured": true, 00:17:41.598 "data_offset": 256, 00:17:41.598 "data_size": 7936 00:17:41.598 }, 00:17:41.598 { 00:17:41.598 "name": "BaseBdev2", 00:17:41.598 "uuid": "ba022b73-a502-5263-84cb-5370ec0b2980", 00:17:41.598 "is_configured": true, 00:17:41.598 "data_offset": 256, 00:17:41.598 "data_size": 7936 00:17:41.598 } 00:17:41.598 ] 00:17:41.598 }' 00:17:41.598 13:27:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:41.598 13:27:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:41.598 13:27:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:41.598 13:27:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:41.598 13:27:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:17:41.598 13:27:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:41.598 13:27:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:41.598 13:27:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:17:41.598 13:27:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:41.598 13:27:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:41.598 13:27:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.598 13:27:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.598 13:27:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:41.598 13:27:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.599 13:27:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.599 13:27:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:41.599 "name": "raid_bdev1", 00:17:41.599 "uuid": "14510339-2bdb-4d5a-bca7-83132f946334", 00:17:41.599 "strip_size_kb": 0, 00:17:41.599 "state": "online", 00:17:41.599 "raid_level": "raid1", 00:17:41.599 "superblock": true, 00:17:41.599 "num_base_bdevs": 2, 00:17:41.599 "num_base_bdevs_discovered": 2, 00:17:41.599 "num_base_bdevs_operational": 2, 00:17:41.599 "base_bdevs_list": [ 00:17:41.599 { 00:17:41.599 "name": "spare", 00:17:41.599 "uuid": "ea2b874c-4338-5cca-a2da-97ee36b36cc8", 00:17:41.599 "is_configured": true, 00:17:41.599 "data_offset": 256, 00:17:41.599 "data_size": 7936 00:17:41.599 }, 00:17:41.599 { 00:17:41.599 "name": "BaseBdev2", 00:17:41.599 "uuid": "ba022b73-a502-5263-84cb-5370ec0b2980", 00:17:41.599 "is_configured": true, 00:17:41.599 "data_offset": 256, 00:17:41.599 "data_size": 7936 00:17:41.599 } 00:17:41.599 ] 00:17:41.599 }' 00:17:41.599 13:27:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:41.859 13:27:30 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:41.859 13:27:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:41.859 13:27:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:41.859 13:27:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:41.859 13:27:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:41.859 13:27:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:41.859 13:27:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:41.859 13:27:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:41.859 13:27:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:41.859 13:27:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:41.859 13:27:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:41.859 13:27:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:41.859 13:27:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:41.859 13:27:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.859 13:27:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.859 13:27:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.859 13:27:30 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:41.859 13:27:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.859 13:27:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:41.859 "name": "raid_bdev1", 00:17:41.859 "uuid": "14510339-2bdb-4d5a-bca7-83132f946334", 00:17:41.859 "strip_size_kb": 0, 00:17:41.859 "state": "online", 00:17:41.859 "raid_level": "raid1", 00:17:41.859 "superblock": true, 00:17:41.859 "num_base_bdevs": 2, 00:17:41.859 "num_base_bdevs_discovered": 2, 00:17:41.859 "num_base_bdevs_operational": 2, 00:17:41.859 "base_bdevs_list": [ 00:17:41.859 { 00:17:41.859 "name": "spare", 00:17:41.859 "uuid": "ea2b874c-4338-5cca-a2da-97ee36b36cc8", 00:17:41.859 "is_configured": true, 00:17:41.859 "data_offset": 256, 00:17:41.859 "data_size": 7936 00:17:41.859 }, 00:17:41.859 { 00:17:41.859 "name": "BaseBdev2", 00:17:41.859 "uuid": "ba022b73-a502-5263-84cb-5370ec0b2980", 00:17:41.859 "is_configured": true, 00:17:41.859 "data_offset": 256, 00:17:41.859 "data_size": 7936 00:17:41.859 } 00:17:41.859 ] 00:17:41.859 }' 00:17:41.859 13:27:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:41.859 13:27:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:42.119 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:42.119 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.119 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:42.119 [2024-11-17 13:27:31.292084] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:42.119 [2024-11-17 13:27:31.292159] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from 
online to offline 00:17:42.119 [2024-11-17 13:27:31.292259] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:42.119 [2024-11-17 13:27:31.292361] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:42.119 [2024-11-17 13:27:31.292413] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:42.119 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.119 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.119 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:17:42.119 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.119 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:42.119 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.379 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:42.379 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:42.379 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:42.379 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:42.379 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:42.379 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:42.379 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local 
bdev_list 00:17:42.379 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:42.379 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:42.379 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:17:42.379 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:42.379 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:42.379 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:42.379 /dev/nbd0 00:17:42.379 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:42.379 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:42.379 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:42.379 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:17:42.379 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:42.379 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:42.379 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:42.379 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:17:42.379 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:42.379 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:42.379 13:27:31 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:42.379 1+0 records in 00:17:42.379 1+0 records out 00:17:42.379 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000387925 s, 10.6 MB/s 00:17:42.379 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:42.379 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:17:42.379 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:42.639 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:42.639 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:17:42.639 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:42.639 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:42.639 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:42.639 /dev/nbd1 00:17:42.639 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:42.639 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:42.639 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:42.639 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:17:42.639 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:42.639 13:27:31 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:42.639 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:42.639 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:17:42.639 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:42.639 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:42.639 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:42.639 1+0 records in 00:17:42.639 1+0 records out 00:17:42.639 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000488416 s, 8.4 MB/s 00:17:42.639 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:42.639 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:17:42.639 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:42.899 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:42.899 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:17:42.899 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:42.899 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:42.899 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:42.900 13:27:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 
-- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:42.900 13:27:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:42.900 13:27:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:42.900 13:27:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:42.900 13:27:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:17:42.900 13:27:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:42.900 13:27:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:43.160 13:27:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:43.160 13:27:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:43.160 13:27:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:43.160 13:27:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:43.160 13:27:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:43.160 13:27:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:43.160 13:27:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:17:43.160 13:27:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:17:43.160 13:27:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:43.160 13:27:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:43.420 13:27:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:43.420 13:27:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:43.420 13:27:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:43.420 13:27:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:43.420 13:27:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:43.420 13:27:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:43.420 13:27:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:17:43.420 13:27:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:17:43.420 13:27:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:43.420 13:27:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:43.420 13:27:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.420 13:27:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.420 13:27:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.420 13:27:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:43.420 13:27:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.420 13:27:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.420 [2024-11-17 13:27:32.486267] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: 
Match on spare_delay 00:17:43.420 [2024-11-17 13:27:32.486319] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:43.420 [2024-11-17 13:27:32.486340] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:43.420 [2024-11-17 13:27:32.486349] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:43.420 [2024-11-17 13:27:32.488243] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:43.420 [2024-11-17 13:27:32.488329] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:43.420 [2024-11-17 13:27:32.488393] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:43.420 [2024-11-17 13:27:32.488447] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:43.420 [2024-11-17 13:27:32.488579] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:43.420 spare 00:17:43.420 13:27:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.420 13:27:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:43.420 13:27:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.420 13:27:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.420 [2024-11-17 13:27:32.588454] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:43.420 [2024-11-17 13:27:32.588481] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:43.420 [2024-11-17 13:27:32.588570] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:17:43.420 [2024-11-17 13:27:32.588686] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 
00:17:43.420 [2024-11-17 13:27:32.588693] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:43.420 [2024-11-17 13:27:32.588800] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:43.420 13:27:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.420 13:27:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:43.420 13:27:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:43.420 13:27:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:43.420 13:27:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:43.420 13:27:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:43.420 13:27:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:43.420 13:27:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:43.420 13:27:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:43.420 13:27:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:43.420 13:27:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:43.420 13:27:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.420 13:27:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.420 13:27:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:43.420 13:27:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.420 13:27:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.683 13:27:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:43.683 "name": "raid_bdev1", 00:17:43.683 "uuid": "14510339-2bdb-4d5a-bca7-83132f946334", 00:17:43.683 "strip_size_kb": 0, 00:17:43.683 "state": "online", 00:17:43.683 "raid_level": "raid1", 00:17:43.683 "superblock": true, 00:17:43.683 "num_base_bdevs": 2, 00:17:43.683 "num_base_bdevs_discovered": 2, 00:17:43.683 "num_base_bdevs_operational": 2, 00:17:43.683 "base_bdevs_list": [ 00:17:43.683 { 00:17:43.683 "name": "spare", 00:17:43.683 "uuid": "ea2b874c-4338-5cca-a2da-97ee36b36cc8", 00:17:43.683 "is_configured": true, 00:17:43.683 "data_offset": 256, 00:17:43.683 "data_size": 7936 00:17:43.683 }, 00:17:43.683 { 00:17:43.683 "name": "BaseBdev2", 00:17:43.683 "uuid": "ba022b73-a502-5263-84cb-5370ec0b2980", 00:17:43.683 "is_configured": true, 00:17:43.683 "data_offset": 256, 00:17:43.683 "data_size": 7936 00:17:43.683 } 00:17:43.683 ] 00:17:43.683 }' 00:17:43.683 13:27:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:43.683 13:27:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.943 13:27:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:43.943 13:27:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:43.943 13:27:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:43.943 13:27:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:43.943 13:27:33 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:43.943 13:27:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.943 13:27:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.943 13:27:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.943 13:27:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.943 13:27:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.943 13:27:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:43.943 "name": "raid_bdev1", 00:17:43.943 "uuid": "14510339-2bdb-4d5a-bca7-83132f946334", 00:17:43.943 "strip_size_kb": 0, 00:17:43.943 "state": "online", 00:17:43.943 "raid_level": "raid1", 00:17:43.943 "superblock": true, 00:17:43.943 "num_base_bdevs": 2, 00:17:43.943 "num_base_bdevs_discovered": 2, 00:17:43.943 "num_base_bdevs_operational": 2, 00:17:43.943 "base_bdevs_list": [ 00:17:43.943 { 00:17:43.943 "name": "spare", 00:17:43.943 "uuid": "ea2b874c-4338-5cca-a2da-97ee36b36cc8", 00:17:43.943 "is_configured": true, 00:17:43.943 "data_offset": 256, 00:17:43.943 "data_size": 7936 00:17:43.943 }, 00:17:43.943 { 00:17:43.943 "name": "BaseBdev2", 00:17:43.943 "uuid": "ba022b73-a502-5263-84cb-5370ec0b2980", 00:17:43.943 "is_configured": true, 00:17:43.943 "data_offset": 256, 00:17:43.943 "data_size": 7936 00:17:43.943 } 00:17:43.943 ] 00:17:43.943 }' 00:17:43.943 13:27:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:44.202 13:27:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:44.202 13:27:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target 
// "none"' 00:17:44.202 13:27:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:44.202 13:27:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.202 13:27:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:44.202 13:27:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.202 13:27:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.202 13:27:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.202 13:27:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:44.202 13:27:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:44.202 13:27:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.202 13:27:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.202 [2024-11-17 13:27:33.268948] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:44.202 13:27:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.202 13:27:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:44.202 13:27:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:44.202 13:27:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:44.202 13:27:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:44.202 13:27:33 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:44.202 13:27:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:44.202 13:27:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:44.202 13:27:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:44.202 13:27:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:44.202 13:27:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:44.202 13:27:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.202 13:27:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.202 13:27:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.202 13:27:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.202 13:27:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.202 13:27:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:44.202 "name": "raid_bdev1", 00:17:44.202 "uuid": "14510339-2bdb-4d5a-bca7-83132f946334", 00:17:44.202 "strip_size_kb": 0, 00:17:44.202 "state": "online", 00:17:44.202 "raid_level": "raid1", 00:17:44.202 "superblock": true, 00:17:44.202 "num_base_bdevs": 2, 00:17:44.202 "num_base_bdevs_discovered": 1, 00:17:44.202 "num_base_bdevs_operational": 1, 00:17:44.202 "base_bdevs_list": [ 00:17:44.203 { 00:17:44.203 "name": null, 00:17:44.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.203 "is_configured": false, 00:17:44.203 "data_offset": 0, 00:17:44.203 "data_size": 7936 
00:17:44.203 }, 00:17:44.203 { 00:17:44.203 "name": "BaseBdev2", 00:17:44.203 "uuid": "ba022b73-a502-5263-84cb-5370ec0b2980", 00:17:44.203 "is_configured": true, 00:17:44.203 "data_offset": 256, 00:17:44.203 "data_size": 7936 00:17:44.203 } 00:17:44.203 ] 00:17:44.203 }' 00:17:44.203 13:27:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:44.203 13:27:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.771 13:27:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:44.771 13:27:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.771 13:27:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.771 [2024-11-17 13:27:33.704244] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:44.771 [2024-11-17 13:27:33.704427] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:44.771 [2024-11-17 13:27:33.704493] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:44.771 [2024-11-17 13:27:33.704557] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:44.771 [2024-11-17 13:27:33.717397] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:17:44.771 13:27:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.771 [2024-11-17 13:27:33.719093] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:44.771 13:27:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:45.709 13:27:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:45.709 13:27:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:45.709 13:27:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:45.709 13:27:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:45.709 13:27:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:45.709 13:27:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.709 13:27:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.709 13:27:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.709 13:27:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.709 13:27:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.709 13:27:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:45.709 "name": "raid_bdev1", 00:17:45.709 
"uuid": "14510339-2bdb-4d5a-bca7-83132f946334", 00:17:45.709 "strip_size_kb": 0, 00:17:45.709 "state": "online", 00:17:45.709 "raid_level": "raid1", 00:17:45.709 "superblock": true, 00:17:45.709 "num_base_bdevs": 2, 00:17:45.709 "num_base_bdevs_discovered": 2, 00:17:45.709 "num_base_bdevs_operational": 2, 00:17:45.709 "process": { 00:17:45.709 "type": "rebuild", 00:17:45.709 "target": "spare", 00:17:45.709 "progress": { 00:17:45.709 "blocks": 2560, 00:17:45.709 "percent": 32 00:17:45.709 } 00:17:45.709 }, 00:17:45.709 "base_bdevs_list": [ 00:17:45.709 { 00:17:45.709 "name": "spare", 00:17:45.710 "uuid": "ea2b874c-4338-5cca-a2da-97ee36b36cc8", 00:17:45.710 "is_configured": true, 00:17:45.710 "data_offset": 256, 00:17:45.710 "data_size": 7936 00:17:45.710 }, 00:17:45.710 { 00:17:45.710 "name": "BaseBdev2", 00:17:45.710 "uuid": "ba022b73-a502-5263-84cb-5370ec0b2980", 00:17:45.710 "is_configured": true, 00:17:45.710 "data_offset": 256, 00:17:45.710 "data_size": 7936 00:17:45.710 } 00:17:45.710 ] 00:17:45.710 }' 00:17:45.710 13:27:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:45.710 13:27:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:45.710 13:27:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:45.710 13:27:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:45.710 13:27:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:45.710 13:27:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.710 13:27:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.710 [2024-11-17 13:27:34.859536] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:45.710 
[2024-11-17 13:27:34.923884] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:45.710 [2024-11-17 13:27:34.923940] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:45.710 [2024-11-17 13:27:34.923954] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:45.710 [2024-11-17 13:27:34.923974] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:45.970 13:27:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.970 13:27:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:45.970 13:27:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:45.970 13:27:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:45.970 13:27:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:45.970 13:27:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:45.970 13:27:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:45.970 13:27:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:45.970 13:27:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:45.970 13:27:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:45.970 13:27:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:45.970 13:27:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:17:45.970 13:27:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.970 13:27:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.970 13:27:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.970 13:27:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.970 13:27:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:45.970 "name": "raid_bdev1", 00:17:45.970 "uuid": "14510339-2bdb-4d5a-bca7-83132f946334", 00:17:45.970 "strip_size_kb": 0, 00:17:45.970 "state": "online", 00:17:45.970 "raid_level": "raid1", 00:17:45.970 "superblock": true, 00:17:45.970 "num_base_bdevs": 2, 00:17:45.970 "num_base_bdevs_discovered": 1, 00:17:45.970 "num_base_bdevs_operational": 1, 00:17:45.970 "base_bdevs_list": [ 00:17:45.970 { 00:17:45.970 "name": null, 00:17:45.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.970 "is_configured": false, 00:17:45.970 "data_offset": 0, 00:17:45.970 "data_size": 7936 00:17:45.970 }, 00:17:45.970 { 00:17:45.970 "name": "BaseBdev2", 00:17:45.970 "uuid": "ba022b73-a502-5263-84cb-5370ec0b2980", 00:17:45.970 "is_configured": true, 00:17:45.970 "data_offset": 256, 00:17:45.970 "data_size": 7936 00:17:45.970 } 00:17:45.970 ] 00:17:45.970 }' 00:17:45.970 13:27:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:45.970 13:27:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:46.231 13:27:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:46.231 13:27:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.231 13:27:35 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:17:46.231 [2024-11-17 13:27:35.379115] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:46.231 [2024-11-17 13:27:35.379224] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:46.231 [2024-11-17 13:27:35.379264] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:46.231 [2024-11-17 13:27:35.379295] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:46.231 [2024-11-17 13:27:35.379565] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:46.231 [2024-11-17 13:27:35.379620] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:46.231 [2024-11-17 13:27:35.379706] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:46.231 [2024-11-17 13:27:35.379744] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:46.231 [2024-11-17 13:27:35.379800] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:46.231 [2024-11-17 13:27:35.379841] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:46.231 [2024-11-17 13:27:35.392818] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:17:46.231 spare 00:17:46.231 13:27:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.231 [2024-11-17 13:27:35.394585] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:46.231 13:27:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:47.612 13:27:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:47.612 13:27:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:47.612 13:27:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:47.612 13:27:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:47.612 13:27:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:47.612 13:27:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.612 13:27:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.612 13:27:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.612 13:27:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.612 13:27:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.612 13:27:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:47.612 "name": 
"raid_bdev1", 00:17:47.612 "uuid": "14510339-2bdb-4d5a-bca7-83132f946334", 00:17:47.612 "strip_size_kb": 0, 00:17:47.612 "state": "online", 00:17:47.612 "raid_level": "raid1", 00:17:47.612 "superblock": true, 00:17:47.612 "num_base_bdevs": 2, 00:17:47.612 "num_base_bdevs_discovered": 2, 00:17:47.612 "num_base_bdevs_operational": 2, 00:17:47.612 "process": { 00:17:47.612 "type": "rebuild", 00:17:47.612 "target": "spare", 00:17:47.612 "progress": { 00:17:47.612 "blocks": 2560, 00:17:47.612 "percent": 32 00:17:47.612 } 00:17:47.612 }, 00:17:47.612 "base_bdevs_list": [ 00:17:47.612 { 00:17:47.612 "name": "spare", 00:17:47.612 "uuid": "ea2b874c-4338-5cca-a2da-97ee36b36cc8", 00:17:47.612 "is_configured": true, 00:17:47.612 "data_offset": 256, 00:17:47.612 "data_size": 7936 00:17:47.613 }, 00:17:47.613 { 00:17:47.613 "name": "BaseBdev2", 00:17:47.613 "uuid": "ba022b73-a502-5263-84cb-5370ec0b2980", 00:17:47.613 "is_configured": true, 00:17:47.613 "data_offset": 256, 00:17:47.613 "data_size": 7936 00:17:47.613 } 00:17:47.613 ] 00:17:47.613 }' 00:17:47.613 13:27:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:47.613 13:27:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:47.613 13:27:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:47.613 13:27:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:47.613 13:27:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:47.613 13:27:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.613 13:27:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.613 [2024-11-17 13:27:36.535360] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:17:47.613 [2024-11-17 13:27:36.599196] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:47.613 [2024-11-17 13:27:36.599304] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:47.613 [2024-11-17 13:27:36.599340] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:47.613 [2024-11-17 13:27:36.599360] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:47.613 13:27:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.613 13:27:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:47.613 13:27:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:47.613 13:27:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:47.613 13:27:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:47.613 13:27:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:47.613 13:27:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:47.613 13:27:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:47.613 13:27:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:47.613 13:27:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:47.613 13:27:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:47.613 13:27:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:17:47.613 13:27:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.613 13:27:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.613 13:27:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.613 13:27:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.613 13:27:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:47.613 "name": "raid_bdev1", 00:17:47.613 "uuid": "14510339-2bdb-4d5a-bca7-83132f946334", 00:17:47.613 "strip_size_kb": 0, 00:17:47.613 "state": "online", 00:17:47.613 "raid_level": "raid1", 00:17:47.613 "superblock": true, 00:17:47.613 "num_base_bdevs": 2, 00:17:47.613 "num_base_bdevs_discovered": 1, 00:17:47.613 "num_base_bdevs_operational": 1, 00:17:47.613 "base_bdevs_list": [ 00:17:47.613 { 00:17:47.613 "name": null, 00:17:47.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.613 "is_configured": false, 00:17:47.613 "data_offset": 0, 00:17:47.613 "data_size": 7936 00:17:47.613 }, 00:17:47.613 { 00:17:47.613 "name": "BaseBdev2", 00:17:47.613 "uuid": "ba022b73-a502-5263-84cb-5370ec0b2980", 00:17:47.613 "is_configured": true, 00:17:47.613 "data_offset": 256, 00:17:47.613 "data_size": 7936 00:17:47.613 } 00:17:47.613 ] 00:17:47.613 }' 00:17:47.613 13:27:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:47.613 13:27:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.873 13:27:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:47.873 13:27:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:47.873 13:27:37 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:47.873 13:27:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:47.873 13:27:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:47.873 13:27:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.873 13:27:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.873 13:27:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.873 13:27:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.873 13:27:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.133 13:27:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:48.133 "name": "raid_bdev1", 00:17:48.133 "uuid": "14510339-2bdb-4d5a-bca7-83132f946334", 00:17:48.133 "strip_size_kb": 0, 00:17:48.133 "state": "online", 00:17:48.133 "raid_level": "raid1", 00:17:48.133 "superblock": true, 00:17:48.133 "num_base_bdevs": 2, 00:17:48.133 "num_base_bdevs_discovered": 1, 00:17:48.133 "num_base_bdevs_operational": 1, 00:17:48.133 "base_bdevs_list": [ 00:17:48.133 { 00:17:48.133 "name": null, 00:17:48.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.133 "is_configured": false, 00:17:48.133 "data_offset": 0, 00:17:48.133 "data_size": 7936 00:17:48.133 }, 00:17:48.133 { 00:17:48.133 "name": "BaseBdev2", 00:17:48.133 "uuid": "ba022b73-a502-5263-84cb-5370ec0b2980", 00:17:48.133 "is_configured": true, 00:17:48.133 "data_offset": 256, 00:17:48.133 "data_size": 7936 00:17:48.133 } 00:17:48.133 ] 00:17:48.133 }' 00:17:48.133 13:27:37 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:48.133 13:27:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:48.133 13:27:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:48.133 13:27:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:48.133 13:27:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:48.133 13:27:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.133 13:27:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.133 13:27:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.133 13:27:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:48.133 13:27:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.133 13:27:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.133 [2024-11-17 13:27:37.241988] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:48.133 [2024-11-17 13:27:37.242037] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:48.133 [2024-11-17 13:27:37.242060] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:48.133 [2024-11-17 13:27:37.242068] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:48.133 [2024-11-17 13:27:37.242270] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:48.134 [2024-11-17 13:27:37.242282] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:17:48.134 [2024-11-17 13:27:37.242324] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:48.134 [2024-11-17 13:27:37.242338] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:48.134 [2024-11-17 13:27:37.242348] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:48.134 [2024-11-17 13:27:37.242357] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:48.134 BaseBdev1 00:17:48.134 13:27:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.134 13:27:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:49.073 13:27:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:49.073 13:27:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:49.073 13:27:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:49.073 13:27:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:49.073 13:27:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:49.073 13:27:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:49.073 13:27:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.073 13:27:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.073 13:27:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:17:49.073 13:27:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.073 13:27:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.073 13:27:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.073 13:27:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.073 13:27:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.073 13:27:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.333 13:27:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.333 "name": "raid_bdev1", 00:17:49.333 "uuid": "14510339-2bdb-4d5a-bca7-83132f946334", 00:17:49.333 "strip_size_kb": 0, 00:17:49.333 "state": "online", 00:17:49.333 "raid_level": "raid1", 00:17:49.333 "superblock": true, 00:17:49.333 "num_base_bdevs": 2, 00:17:49.333 "num_base_bdevs_discovered": 1, 00:17:49.333 "num_base_bdevs_operational": 1, 00:17:49.333 "base_bdevs_list": [ 00:17:49.333 { 00:17:49.333 "name": null, 00:17:49.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.333 "is_configured": false, 00:17:49.333 "data_offset": 0, 00:17:49.333 "data_size": 7936 00:17:49.333 }, 00:17:49.333 { 00:17:49.333 "name": "BaseBdev2", 00:17:49.333 "uuid": "ba022b73-a502-5263-84cb-5370ec0b2980", 00:17:49.333 "is_configured": true, 00:17:49.333 "data_offset": 256, 00:17:49.333 "data_size": 7936 00:17:49.333 } 00:17:49.333 ] 00:17:49.333 }' 00:17:49.333 13:27:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.333 13:27:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.593 13:27:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:17:49.593 13:27:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:49.593 13:27:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:49.593 13:27:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:49.593 13:27:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:49.593 13:27:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.593 13:27:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.593 13:27:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.593 13:27:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.593 13:27:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.593 13:27:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:49.593 "name": "raid_bdev1", 00:17:49.593 "uuid": "14510339-2bdb-4d5a-bca7-83132f946334", 00:17:49.593 "strip_size_kb": 0, 00:17:49.593 "state": "online", 00:17:49.593 "raid_level": "raid1", 00:17:49.593 "superblock": true, 00:17:49.593 "num_base_bdevs": 2, 00:17:49.593 "num_base_bdevs_discovered": 1, 00:17:49.593 "num_base_bdevs_operational": 1, 00:17:49.593 "base_bdevs_list": [ 00:17:49.593 { 00:17:49.593 "name": null, 00:17:49.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.593 "is_configured": false, 00:17:49.593 "data_offset": 0, 00:17:49.593 "data_size": 7936 00:17:49.593 }, 00:17:49.593 { 00:17:49.593 "name": "BaseBdev2", 00:17:49.593 "uuid": "ba022b73-a502-5263-84cb-5370ec0b2980", 00:17:49.593 "is_configured": 
true, 00:17:49.593 "data_offset": 256, 00:17:49.593 "data_size": 7936 00:17:49.593 } 00:17:49.593 ] 00:17:49.593 }' 00:17:49.593 13:27:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:49.593 13:27:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:49.593 13:27:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:49.593 13:27:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:49.593 13:27:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:49.593 13:27:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:17:49.593 13:27:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:49.593 13:27:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:49.593 13:27:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:49.593 13:27:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:49.593 13:27:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:49.593 13:27:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:49.593 13:27:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.593 13:27:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.593 [2024-11-17 13:27:38.799342] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:49.593 [2024-11-17 13:27:38.799460] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:49.593 [2024-11-17 13:27:38.799473] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:49.593 request: 00:17:49.593 { 00:17:49.593 "base_bdev": "BaseBdev1", 00:17:49.593 "raid_bdev": "raid_bdev1", 00:17:49.593 "method": "bdev_raid_add_base_bdev", 00:17:49.594 "req_id": 1 00:17:49.594 } 00:17:49.594 Got JSON-RPC error response 00:17:49.594 response: 00:17:49.594 { 00:17:49.594 "code": -22, 00:17:49.594 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:49.594 } 00:17:49.594 13:27:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:49.594 13:27:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:17:49.594 13:27:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:49.594 13:27:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:49.594 13:27:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:49.594 13:27:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:50.974 13:27:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:50.974 13:27:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:50.974 13:27:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:50.974 13:27:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:17:50.974 13:27:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:50.974 13:27:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:50.974 13:27:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.974 13:27:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.974 13:27:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.974 13:27:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.974 13:27:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.974 13:27:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.974 13:27:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.974 13:27:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.974 13:27:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.974 13:27:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.974 "name": "raid_bdev1", 00:17:50.974 "uuid": "14510339-2bdb-4d5a-bca7-83132f946334", 00:17:50.974 "strip_size_kb": 0, 00:17:50.974 "state": "online", 00:17:50.974 "raid_level": "raid1", 00:17:50.974 "superblock": true, 00:17:50.974 "num_base_bdevs": 2, 00:17:50.974 "num_base_bdevs_discovered": 1, 00:17:50.974 "num_base_bdevs_operational": 1, 00:17:50.974 "base_bdevs_list": [ 00:17:50.974 { 00:17:50.974 "name": null, 00:17:50.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.974 "is_configured": false, 00:17:50.974 
"data_offset": 0, 00:17:50.974 "data_size": 7936 00:17:50.974 }, 00:17:50.974 { 00:17:50.974 "name": "BaseBdev2", 00:17:50.974 "uuid": "ba022b73-a502-5263-84cb-5370ec0b2980", 00:17:50.974 "is_configured": true, 00:17:50.974 "data_offset": 256, 00:17:50.974 "data_size": 7936 00:17:50.974 } 00:17:50.974 ] 00:17:50.974 }' 00:17:50.974 13:27:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.974 13:27:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.233 13:27:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:51.233 13:27:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:51.233 13:27:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:51.233 13:27:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:51.233 13:27:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:51.233 13:27:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.233 13:27:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.233 13:27:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.233 13:27:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.233 13:27:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.233 13:27:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:51.233 "name": "raid_bdev1", 00:17:51.233 "uuid": "14510339-2bdb-4d5a-bca7-83132f946334", 00:17:51.233 
"strip_size_kb": 0, 00:17:51.233 "state": "online", 00:17:51.233 "raid_level": "raid1", 00:17:51.233 "superblock": true, 00:17:51.233 "num_base_bdevs": 2, 00:17:51.233 "num_base_bdevs_discovered": 1, 00:17:51.233 "num_base_bdevs_operational": 1, 00:17:51.233 "base_bdevs_list": [ 00:17:51.233 { 00:17:51.233 "name": null, 00:17:51.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.233 "is_configured": false, 00:17:51.233 "data_offset": 0, 00:17:51.233 "data_size": 7936 00:17:51.233 }, 00:17:51.233 { 00:17:51.233 "name": "BaseBdev2", 00:17:51.233 "uuid": "ba022b73-a502-5263-84cb-5370ec0b2980", 00:17:51.233 "is_configured": true, 00:17:51.233 "data_offset": 256, 00:17:51.233 "data_size": 7936 00:17:51.233 } 00:17:51.233 ] 00:17:51.233 }' 00:17:51.233 13:27:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:51.233 13:27:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:51.233 13:27:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:51.233 13:27:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:51.233 13:27:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 87626 00:17:51.233 13:27:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87626 ']' 00:17:51.233 13:27:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87626 00:17:51.233 13:27:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:17:51.233 13:27:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:51.233 13:27:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87626 00:17:51.233 killing process with 
pid 87626 00:17:51.233 Received shutdown signal, test time was about 60.000000 seconds 00:17:51.233 00:17:51.233 Latency(us) 00:17:51.233 [2024-11-17T13:27:40.457Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:51.233 [2024-11-17T13:27:40.457Z] =================================================================================================================== 00:17:51.233 [2024-11-17T13:27:40.457Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:51.233 13:27:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:51.233 13:27:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:51.233 13:27:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87626' 00:17:51.233 13:27:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87626 00:17:51.233 [2024-11-17 13:27:40.454754] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:51.233 [2024-11-17 13:27:40.454872] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:51.233 [2024-11-17 13:27:40.454915] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:51.233 [2024-11-17 13:27:40.454927] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:51.233 13:27:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87626 00:17:51.802 [2024-11-17 13:27:40.803977] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:52.741 13:27:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:17:52.741 00:17:52.741 real 0m19.911s 00:17:52.741 user 0m25.926s 00:17:52.741 sys 0m2.649s 00:17:52.741 13:27:41 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:52.741 ************************************ 00:17:52.741 END TEST raid_rebuild_test_sb_md_separate 00:17:52.741 ************************************ 00:17:52.741 13:27:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:53.001 13:27:41 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:17:53.002 13:27:41 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:17:53.002 13:27:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:53.002 13:27:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:53.002 13:27:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:53.002 ************************************ 00:17:53.002 START TEST raid_state_function_test_sb_md_interleaved 00:17:53.002 ************************************ 00:17:53.002 13:27:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:17:53.002 13:27:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:53.002 13:27:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:53.002 13:27:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:53.002 13:27:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:53.002 13:27:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:53.002 13:27:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:53.002 13:27:42 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:53.002 13:27:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:53.002 13:27:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:53.002 13:27:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:53.002 13:27:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:53.002 13:27:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:53.002 13:27:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:53.002 13:27:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:53.002 13:27:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:53.002 13:27:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:53.002 13:27:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:53.002 13:27:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:53.002 13:27:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:53.002 13:27:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:53.002 13:27:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:53.002 13:27:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:17:53.002 Process raid pid: 88314 00:17:53.002 13:27:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88314 00:17:53.002 13:27:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:53.002 13:27:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88314' 00:17:53.002 13:27:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88314 00:17:53.002 13:27:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88314 ']' 00:17:53.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:53.002 13:27:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:53.002 13:27:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:53.002 13:27:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:53.002 13:27:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:53.002 13:27:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:53.002 [2024-11-17 13:27:42.129302] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:17:53.002 [2024-11-17 13:27:42.129436] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:53.262 [2024-11-17 13:27:42.312030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.262 [2024-11-17 13:27:42.454651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.523 [2024-11-17 13:27:42.689014] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:53.523 [2024-11-17 13:27:42.689053] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:53.783 13:27:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:53.783 13:27:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:17:53.783 13:27:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:53.783 13:27:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.783 13:27:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:53.783 [2024-11-17 13:27:42.942321] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:53.783 [2024-11-17 13:27:42.942444] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:53.783 [2024-11-17 13:27:42.942459] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:53.783 [2024-11-17 13:27:42.942469] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:53.783 13:27:42 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.783 13:27:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:53.783 13:27:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:53.783 13:27:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:53.783 13:27:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:53.783 13:27:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:53.783 13:27:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:53.783 13:27:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:53.783 13:27:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:53.783 13:27:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:53.783 13:27:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:53.783 13:27:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.783 13:27:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:53.783 13:27:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.783 13:27:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:53.783 13:27:42 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.783 13:27:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:53.783 "name": "Existed_Raid", 00:17:53.783 "uuid": "be509490-ac3f-40ed-b834-ef02507adcbd", 00:17:53.783 "strip_size_kb": 0, 00:17:53.784 "state": "configuring", 00:17:53.784 "raid_level": "raid1", 00:17:53.784 "superblock": true, 00:17:53.784 "num_base_bdevs": 2, 00:17:53.784 "num_base_bdevs_discovered": 0, 00:17:53.784 "num_base_bdevs_operational": 2, 00:17:53.784 "base_bdevs_list": [ 00:17:53.784 { 00:17:53.784 "name": "BaseBdev1", 00:17:53.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.784 "is_configured": false, 00:17:53.784 "data_offset": 0, 00:17:53.784 "data_size": 0 00:17:53.784 }, 00:17:53.784 { 00:17:53.784 "name": "BaseBdev2", 00:17:53.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.784 "is_configured": false, 00:17:53.784 "data_offset": 0, 00:17:53.784 "data_size": 0 00:17:53.784 } 00:17:53.784 ] 00:17:53.784 }' 00:17:53.784 13:27:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:53.784 13:27:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:54.355 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:54.355 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.355 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:54.355 [2024-11-17 13:27:43.405426] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:54.355 [2024-11-17 13:27:43.405501] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:17:54.355 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.355 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:54.355 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.355 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:54.355 [2024-11-17 13:27:43.413415] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:54.355 [2024-11-17 13:27:43.413494] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:54.355 [2024-11-17 13:27:43.413520] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:54.355 [2024-11-17 13:27:43.413546] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:54.355 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.355 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:17:54.355 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.355 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:54.355 [2024-11-17 13:27:43.464100] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:54.355 BaseBdev1 00:17:54.355 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.355 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:54.355 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:54.355 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:54.355 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:17:54.355 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:54.355 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:54.355 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:54.355 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.355 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:54.355 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.355 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:54.355 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.355 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:54.355 [ 00:17:54.355 { 00:17:54.355 "name": "BaseBdev1", 00:17:54.355 "aliases": [ 00:17:54.355 "3970c6a3-5e0b-4fcc-b249-52890859e6c6" 00:17:54.355 ], 00:17:54.355 "product_name": "Malloc disk", 00:17:54.355 "block_size": 4128, 00:17:54.355 "num_blocks": 8192, 00:17:54.355 "uuid": "3970c6a3-5e0b-4fcc-b249-52890859e6c6", 00:17:54.355 "md_size": 32, 00:17:54.355 
"md_interleave": true, 00:17:54.355 "dif_type": 0, 00:17:54.355 "assigned_rate_limits": { 00:17:54.355 "rw_ios_per_sec": 0, 00:17:54.355 "rw_mbytes_per_sec": 0, 00:17:54.355 "r_mbytes_per_sec": 0, 00:17:54.355 "w_mbytes_per_sec": 0 00:17:54.355 }, 00:17:54.355 "claimed": true, 00:17:54.355 "claim_type": "exclusive_write", 00:17:54.355 "zoned": false, 00:17:54.355 "supported_io_types": { 00:17:54.355 "read": true, 00:17:54.355 "write": true, 00:17:54.355 "unmap": true, 00:17:54.355 "flush": true, 00:17:54.355 "reset": true, 00:17:54.355 "nvme_admin": false, 00:17:54.356 "nvme_io": false, 00:17:54.356 "nvme_io_md": false, 00:17:54.356 "write_zeroes": true, 00:17:54.356 "zcopy": true, 00:17:54.356 "get_zone_info": false, 00:17:54.356 "zone_management": false, 00:17:54.356 "zone_append": false, 00:17:54.356 "compare": false, 00:17:54.356 "compare_and_write": false, 00:17:54.356 "abort": true, 00:17:54.356 "seek_hole": false, 00:17:54.356 "seek_data": false, 00:17:54.356 "copy": true, 00:17:54.356 "nvme_iov_md": false 00:17:54.356 }, 00:17:54.356 "memory_domains": [ 00:17:54.356 { 00:17:54.356 "dma_device_id": "system", 00:17:54.356 "dma_device_type": 1 00:17:54.356 }, 00:17:54.356 { 00:17:54.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:54.356 "dma_device_type": 2 00:17:54.356 } 00:17:54.356 ], 00:17:54.356 "driver_specific": {} 00:17:54.356 } 00:17:54.356 ] 00:17:54.356 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.356 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:17:54.356 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:54.356 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:54.356 13:27:43 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:54.356 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:54.356 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:54.356 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:54.356 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:54.356 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:54.356 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:54.356 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:54.356 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.356 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:54.356 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.356 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:54.356 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.356 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:54.356 "name": "Existed_Raid", 00:17:54.356 "uuid": "499aca8c-9d0e-45cf-864c-f48fd34f467b", 00:17:54.356 "strip_size_kb": 0, 00:17:54.356 "state": "configuring", 00:17:54.356 "raid_level": "raid1", 
00:17:54.356 "superblock": true, 00:17:54.356 "num_base_bdevs": 2, 00:17:54.356 "num_base_bdevs_discovered": 1, 00:17:54.356 "num_base_bdevs_operational": 2, 00:17:54.356 "base_bdevs_list": [ 00:17:54.356 { 00:17:54.356 "name": "BaseBdev1", 00:17:54.356 "uuid": "3970c6a3-5e0b-4fcc-b249-52890859e6c6", 00:17:54.356 "is_configured": true, 00:17:54.356 "data_offset": 256, 00:17:54.356 "data_size": 7936 00:17:54.356 }, 00:17:54.356 { 00:17:54.356 "name": "BaseBdev2", 00:17:54.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.356 "is_configured": false, 00:17:54.356 "data_offset": 0, 00:17:54.356 "data_size": 0 00:17:54.356 } 00:17:54.356 ] 00:17:54.356 }' 00:17:54.356 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:54.356 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:54.927 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:54.927 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.927 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:54.927 [2024-11-17 13:27:43.931345] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:54.927 [2024-11-17 13:27:43.931394] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:54.927 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.927 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:54.927 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:17:54.927 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:54.927 [2024-11-17 13:27:43.943400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:54.927 [2024-11-17 13:27:43.945545] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:54.927 [2024-11-17 13:27:43.945589] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:54.927 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.927 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:54.928 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:54.928 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:54.928 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:54.928 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:54.928 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:54.928 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:54.928 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:54.928 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:54.928 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:54.928 
13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:54.928 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:54.928 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.928 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:54.928 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.928 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:54.928 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.928 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:54.928 "name": "Existed_Raid", 00:17:54.928 "uuid": "d58a7552-33ff-4792-bbcc-d8171ddf66f8", 00:17:54.928 "strip_size_kb": 0, 00:17:54.928 "state": "configuring", 00:17:54.928 "raid_level": "raid1", 00:17:54.928 "superblock": true, 00:17:54.928 "num_base_bdevs": 2, 00:17:54.928 "num_base_bdevs_discovered": 1, 00:17:54.928 "num_base_bdevs_operational": 2, 00:17:54.928 "base_bdevs_list": [ 00:17:54.928 { 00:17:54.928 "name": "BaseBdev1", 00:17:54.928 "uuid": "3970c6a3-5e0b-4fcc-b249-52890859e6c6", 00:17:54.928 "is_configured": true, 00:17:54.928 "data_offset": 256, 00:17:54.928 "data_size": 7936 00:17:54.928 }, 00:17:54.928 { 00:17:54.928 "name": "BaseBdev2", 00:17:54.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.928 "is_configured": false, 00:17:54.928 "data_offset": 0, 00:17:54.928 "data_size": 0 00:17:54.928 } 00:17:54.928 ] 00:17:54.928 }' 00:17:54.928 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:17:54.928 13:27:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:55.188 13:27:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:17:55.188 13:27:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.188 13:27:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:55.188 [2024-11-17 13:27:44.365372] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:55.188 [2024-11-17 13:27:44.365729] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:55.188 [2024-11-17 13:27:44.365782] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:55.188 [2024-11-17 13:27:44.365939] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:55.188 [2024-11-17 13:27:44.366083] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:55.188 [2024-11-17 13:27:44.366124] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:55.188 [2024-11-17 13:27:44.366269] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:55.188 BaseBdev2 00:17:55.188 13:27:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.188 13:27:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:55.188 13:27:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:55.188 13:27:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:17:55.188 13:27:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:17:55.188 13:27:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:55.188 13:27:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:55.188 13:27:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:55.188 13:27:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.188 13:27:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:55.188 13:27:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.189 13:27:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:55.189 13:27:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.189 13:27:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:55.189 [ 00:17:55.189 { 00:17:55.189 "name": "BaseBdev2", 00:17:55.189 "aliases": [ 00:17:55.189 "4eb5e042-7b6b-4d60-b77d-fc1b785b79fe" 00:17:55.189 ], 00:17:55.189 "product_name": "Malloc disk", 00:17:55.189 "block_size": 4128, 00:17:55.189 "num_blocks": 8192, 00:17:55.189 "uuid": "4eb5e042-7b6b-4d60-b77d-fc1b785b79fe", 00:17:55.189 "md_size": 32, 00:17:55.189 "md_interleave": true, 00:17:55.189 "dif_type": 0, 00:17:55.189 "assigned_rate_limits": { 00:17:55.189 "rw_ios_per_sec": 0, 00:17:55.189 "rw_mbytes_per_sec": 0, 00:17:55.189 "r_mbytes_per_sec": 0, 00:17:55.189 "w_mbytes_per_sec": 0 00:17:55.189 }, 00:17:55.189 "claimed": true, 00:17:55.189 "claim_type": "exclusive_write", 
00:17:55.189 "zoned": false, 00:17:55.189 "supported_io_types": { 00:17:55.189 "read": true, 00:17:55.189 "write": true, 00:17:55.189 "unmap": true, 00:17:55.189 "flush": true, 00:17:55.189 "reset": true, 00:17:55.189 "nvme_admin": false, 00:17:55.189 "nvme_io": false, 00:17:55.189 "nvme_io_md": false, 00:17:55.189 "write_zeroes": true, 00:17:55.189 "zcopy": true, 00:17:55.189 "get_zone_info": false, 00:17:55.189 "zone_management": false, 00:17:55.189 "zone_append": false, 00:17:55.189 "compare": false, 00:17:55.189 "compare_and_write": false, 00:17:55.189 "abort": true, 00:17:55.189 "seek_hole": false, 00:17:55.189 "seek_data": false, 00:17:55.189 "copy": true, 00:17:55.189 "nvme_iov_md": false 00:17:55.189 }, 00:17:55.189 "memory_domains": [ 00:17:55.189 { 00:17:55.189 "dma_device_id": "system", 00:17:55.189 "dma_device_type": 1 00:17:55.189 }, 00:17:55.189 { 00:17:55.189 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:55.189 "dma_device_type": 2 00:17:55.189 } 00:17:55.189 ], 00:17:55.189 "driver_specific": {} 00:17:55.189 } 00:17:55.189 ] 00:17:55.189 13:27:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.189 13:27:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:17:55.189 13:27:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:55.189 13:27:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:55.189 13:27:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:55.189 13:27:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:55.189 13:27:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:55.189 
13:27:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:55.189 13:27:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:55.189 13:27:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:55.189 13:27:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:55.189 13:27:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:55.189 13:27:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:55.189 13:27:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:55.449 13:27:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.449 13:27:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.449 13:27:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:55.449 13:27:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:55.449 13:27:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.449 13:27:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:55.449 "name": "Existed_Raid", 00:17:55.449 "uuid": "d58a7552-33ff-4792-bbcc-d8171ddf66f8", 00:17:55.449 "strip_size_kb": 0, 00:17:55.449 "state": "online", 00:17:55.449 "raid_level": "raid1", 00:17:55.449 "superblock": true, 00:17:55.449 "num_base_bdevs": 2, 00:17:55.449 "num_base_bdevs_discovered": 2, 00:17:55.449 
"num_base_bdevs_operational": 2, 00:17:55.449 "base_bdevs_list": [ 00:17:55.449 { 00:17:55.449 "name": "BaseBdev1", 00:17:55.449 "uuid": "3970c6a3-5e0b-4fcc-b249-52890859e6c6", 00:17:55.449 "is_configured": true, 00:17:55.449 "data_offset": 256, 00:17:55.449 "data_size": 7936 00:17:55.449 }, 00:17:55.449 { 00:17:55.449 "name": "BaseBdev2", 00:17:55.449 "uuid": "4eb5e042-7b6b-4d60-b77d-fc1b785b79fe", 00:17:55.449 "is_configured": true, 00:17:55.449 "data_offset": 256, 00:17:55.449 "data_size": 7936 00:17:55.449 } 00:17:55.449 ] 00:17:55.449 }' 00:17:55.449 13:27:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:55.449 13:27:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:55.710 13:27:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:55.710 13:27:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:55.710 13:27:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:55.710 13:27:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:55.710 13:27:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:17:55.710 13:27:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:55.710 13:27:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:55.710 13:27:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:55.710 13:27:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.710 13:27:44 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:55.710 [2024-11-17 13:27:44.832951] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:55.710 13:27:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.710 13:27:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:55.710 "name": "Existed_Raid", 00:17:55.710 "aliases": [ 00:17:55.710 "d58a7552-33ff-4792-bbcc-d8171ddf66f8" 00:17:55.710 ], 00:17:55.710 "product_name": "Raid Volume", 00:17:55.710 "block_size": 4128, 00:17:55.710 "num_blocks": 7936, 00:17:55.710 "uuid": "d58a7552-33ff-4792-bbcc-d8171ddf66f8", 00:17:55.710 "md_size": 32, 00:17:55.710 "md_interleave": true, 00:17:55.710 "dif_type": 0, 00:17:55.710 "assigned_rate_limits": { 00:17:55.710 "rw_ios_per_sec": 0, 00:17:55.710 "rw_mbytes_per_sec": 0, 00:17:55.710 "r_mbytes_per_sec": 0, 00:17:55.710 "w_mbytes_per_sec": 0 00:17:55.710 }, 00:17:55.710 "claimed": false, 00:17:55.710 "zoned": false, 00:17:55.710 "supported_io_types": { 00:17:55.710 "read": true, 00:17:55.710 "write": true, 00:17:55.710 "unmap": false, 00:17:55.710 "flush": false, 00:17:55.710 "reset": true, 00:17:55.710 "nvme_admin": false, 00:17:55.710 "nvme_io": false, 00:17:55.710 "nvme_io_md": false, 00:17:55.710 "write_zeroes": true, 00:17:55.710 "zcopy": false, 00:17:55.710 "get_zone_info": false, 00:17:55.710 "zone_management": false, 00:17:55.710 "zone_append": false, 00:17:55.710 "compare": false, 00:17:55.710 "compare_and_write": false, 00:17:55.710 "abort": false, 00:17:55.710 "seek_hole": false, 00:17:55.710 "seek_data": false, 00:17:55.710 "copy": false, 00:17:55.710 "nvme_iov_md": false 00:17:55.710 }, 00:17:55.710 "memory_domains": [ 00:17:55.710 { 00:17:55.710 "dma_device_id": "system", 00:17:55.710 "dma_device_type": 1 00:17:55.710 }, 00:17:55.710 { 00:17:55.710 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:17:55.710 "dma_device_type": 2 00:17:55.710 }, 00:17:55.710 { 00:17:55.710 "dma_device_id": "system", 00:17:55.710 "dma_device_type": 1 00:17:55.710 }, 00:17:55.710 { 00:17:55.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:55.710 "dma_device_type": 2 00:17:55.710 } 00:17:55.710 ], 00:17:55.710 "driver_specific": { 00:17:55.710 "raid": { 00:17:55.710 "uuid": "d58a7552-33ff-4792-bbcc-d8171ddf66f8", 00:17:55.710 "strip_size_kb": 0, 00:17:55.710 "state": "online", 00:17:55.710 "raid_level": "raid1", 00:17:55.710 "superblock": true, 00:17:55.710 "num_base_bdevs": 2, 00:17:55.710 "num_base_bdevs_discovered": 2, 00:17:55.710 "num_base_bdevs_operational": 2, 00:17:55.710 "base_bdevs_list": [ 00:17:55.710 { 00:17:55.710 "name": "BaseBdev1", 00:17:55.710 "uuid": "3970c6a3-5e0b-4fcc-b249-52890859e6c6", 00:17:55.710 "is_configured": true, 00:17:55.710 "data_offset": 256, 00:17:55.710 "data_size": 7936 00:17:55.710 }, 00:17:55.710 { 00:17:55.710 "name": "BaseBdev2", 00:17:55.710 "uuid": "4eb5e042-7b6b-4d60-b77d-fc1b785b79fe", 00:17:55.710 "is_configured": true, 00:17:55.710 "data_offset": 256, 00:17:55.710 "data_size": 7936 00:17:55.710 } 00:17:55.710 ] 00:17:55.710 } 00:17:55.710 } 00:17:55.710 }' 00:17:55.710 13:27:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:55.710 13:27:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:55.710 BaseBdev2' 00:17:55.710 13:27:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:55.971 13:27:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:17:55.971 13:27:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:17:55.971 13:27:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:55.971 13:27:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:55.971 13:27:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.971 13:27:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:55.971 13:27:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.971 13:27:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:17:55.971 13:27:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:55.971 13:27:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:55.971 13:27:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:55.971 13:27:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.971 13:27:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:55.971 13:27:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:55.971 13:27:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.971 13:27:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:17:55.971 
13:27:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:55.971 13:27:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:55.971 13:27:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.971 13:27:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:55.971 [2024-11-17 13:27:45.080273] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:55.971 13:27:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.971 13:27:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:55.971 13:27:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:55.971 13:27:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:55.971 13:27:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:17:55.971 13:27:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:55.971 13:27:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:55.971 13:27:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:55.971 13:27:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:55.971 13:27:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:55.971 13:27:45 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:55.971 13:27:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:55.971 13:27:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:55.971 13:27:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:55.971 13:27:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:55.971 13:27:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:55.971 13:27:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.971 13:27:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:55.971 13:27:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.971 13:27:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:56.232 13:27:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.232 13:27:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.232 "name": "Existed_Raid", 00:17:56.232 "uuid": "d58a7552-33ff-4792-bbcc-d8171ddf66f8", 00:17:56.232 "strip_size_kb": 0, 00:17:56.232 "state": "online", 00:17:56.232 "raid_level": "raid1", 00:17:56.232 "superblock": true, 00:17:56.232 "num_base_bdevs": 2, 00:17:56.232 "num_base_bdevs_discovered": 1, 00:17:56.232 "num_base_bdevs_operational": 1, 00:17:56.232 "base_bdevs_list": [ 00:17:56.232 { 00:17:56.232 "name": null, 00:17:56.232 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:56.232 "is_configured": false, 00:17:56.232 "data_offset": 0, 00:17:56.232 "data_size": 7936 00:17:56.232 }, 00:17:56.232 { 00:17:56.232 "name": "BaseBdev2", 00:17:56.232 "uuid": "4eb5e042-7b6b-4d60-b77d-fc1b785b79fe", 00:17:56.232 "is_configured": true, 00:17:56.232 "data_offset": 256, 00:17:56.232 "data_size": 7936 00:17:56.232 } 00:17:56.232 ] 00:17:56.232 }' 00:17:56.232 13:27:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.232 13:27:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:56.493 13:27:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:56.493 13:27:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:56.493 13:27:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.493 13:27:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.493 13:27:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:56.493 13:27:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:56.493 13:27:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.493 13:27:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:56.493 13:27:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:56.493 13:27:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:56.493 13:27:45 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.493 13:27:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:56.493 [2024-11-17 13:27:45.694957] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:56.493 [2024-11-17 13:27:45.695084] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:56.753 [2024-11-17 13:27:45.796960] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:56.753 [2024-11-17 13:27:45.797071] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:56.753 [2024-11-17 13:27:45.797115] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:56.753 13:27:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.753 13:27:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:56.753 13:27:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:56.753 13:27:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.753 13:27:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.753 13:27:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:56.753 13:27:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:56.753 13:27:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.753 13:27:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:56.753 13:27:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:56.753 13:27:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:56.753 13:27:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88314 00:17:56.753 13:27:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88314 ']' 00:17:56.753 13:27:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88314 00:17:56.753 13:27:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:17:56.753 13:27:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:56.753 13:27:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88314 00:17:56.753 13:27:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:56.753 13:27:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:56.753 killing process with pid 88314 00:17:56.753 13:27:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88314' 00:17:56.753 13:27:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88314 00:17:56.753 [2024-11-17 13:27:45.895002] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:56.753 13:27:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88314 00:17:56.753 [2024-11-17 13:27:45.911748] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:58.137 
13:27:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:17:58.137 00:17:58.137 real 0m5.046s 00:17:58.137 user 0m7.073s 00:17:58.137 sys 0m1.024s 00:17:58.137 ************************************ 00:17:58.137 END TEST raid_state_function_test_sb_md_interleaved 00:17:58.137 ************************************ 00:17:58.137 13:27:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:58.137 13:27:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:58.137 13:27:47 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:17:58.137 13:27:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:58.137 13:27:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:58.137 13:27:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:58.137 ************************************ 00:17:58.137 START TEST raid_superblock_test_md_interleaved 00:17:58.137 ************************************ 00:17:58.137 13:27:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:17:58.137 13:27:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:58.137 13:27:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:58.137 13:27:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:58.137 13:27:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:58.137 13:27:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:58.137 13:27:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:17:58.137 13:27:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:58.137 13:27:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:58.137 13:27:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:58.137 13:27:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:58.137 13:27:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:58.137 13:27:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:58.137 13:27:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:58.137 13:27:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:58.137 13:27:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:58.137 13:27:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=88566 00:17:58.137 13:27:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:58.137 13:27:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 88566 00:17:58.137 13:27:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88566 ']' 00:17:58.137 13:27:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:58.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:58.137 13:27:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:58.137 13:27:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:58.137 13:27:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:58.137 13:27:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:58.137 [2024-11-17 13:27:47.242386] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:17:58.137 [2024-11-17 13:27:47.242512] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88566 ] 00:17:58.397 [2024-11-17 13:27:47.418600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.397 [2024-11-17 13:27:47.551618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:58.657 [2024-11-17 13:27:47.778060] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:58.657 [2024-11-17 13:27:47.778111] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:58.918 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:58.918 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:17:58.918 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:58.918 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:58.918 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # 
local bdev_malloc=malloc1 00:17:58.918 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:58.918 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:58.918 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:58.918 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:58.918 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:58.918 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:17:58.918 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.918 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:58.918 malloc1 00:17:58.918 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.918 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:58.918 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.918 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:58.918 [2024-11-17 13:27:48.121981] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:58.918 [2024-11-17 13:27:48.122084] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:58.918 [2024-11-17 13:27:48.122127] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 
00:17:58.918 [2024-11-17 13:27:48.122155] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:58.918 [2024-11-17 13:27:48.124347] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:58.918 [2024-11-17 13:27:48.124433] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:58.918 pt1 00:17:58.918 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.918 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:58.918 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:58.918 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:58.918 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:58.918 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:58.918 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:58.918 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:58.918 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:58.918 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:17:58.918 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.918 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.178 malloc2 00:17:59.178 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.178 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:59.178 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.178 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.178 [2024-11-17 13:27:48.188014] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:59.178 [2024-11-17 13:27:48.188072] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:59.178 [2024-11-17 13:27:48.188097] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:59.178 [2024-11-17 13:27:48.188117] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:59.178 [2024-11-17 13:27:48.190253] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:59.178 [2024-11-17 13:27:48.190285] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:59.178 pt2 00:17:59.178 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.178 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:59.178 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:59.178 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:59.178 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.178 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.178 [2024-11-17 
13:27:48.200032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:59.178 [2024-11-17 13:27:48.202144] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:59.178 [2024-11-17 13:27:48.202449] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:59.178 [2024-11-17 13:27:48.202468] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:59.178 [2024-11-17 13:27:48.202543] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:59.178 [2024-11-17 13:27:48.202616] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:59.178 [2024-11-17 13:27:48.202627] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:59.178 [2024-11-17 13:27:48.202697] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:59.178 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.178 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:59.178 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:59.178 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:59.178 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:59.178 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:59.179 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:59.179 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:17:59.179 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:59.179 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:59.179 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:59.179 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.179 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.179 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.179 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.179 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.179 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:59.179 "name": "raid_bdev1", 00:17:59.179 "uuid": "8b08b486-c4fd-450b-accf-0b6f64e7b188", 00:17:59.179 "strip_size_kb": 0, 00:17:59.179 "state": "online", 00:17:59.179 "raid_level": "raid1", 00:17:59.179 "superblock": true, 00:17:59.179 "num_base_bdevs": 2, 00:17:59.179 "num_base_bdevs_discovered": 2, 00:17:59.179 "num_base_bdevs_operational": 2, 00:17:59.179 "base_bdevs_list": [ 00:17:59.179 { 00:17:59.179 "name": "pt1", 00:17:59.179 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:59.179 "is_configured": true, 00:17:59.179 "data_offset": 256, 00:17:59.179 "data_size": 7936 00:17:59.179 }, 00:17:59.179 { 00:17:59.179 "name": "pt2", 00:17:59.179 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:59.179 "is_configured": true, 00:17:59.179 "data_offset": 256, 00:17:59.179 "data_size": 7936 00:17:59.179 } 00:17:59.179 ] 00:17:59.179 }' 00:17:59.179 13:27:48 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:59.179 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.439 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:59.439 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:59.439 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:59.439 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:59.439 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:17:59.439 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:59.439 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:59.699 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:59.699 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.699 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.699 [2024-11-17 13:27:48.667475] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:59.699 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.699 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:59.699 "name": "raid_bdev1", 00:17:59.699 "aliases": [ 00:17:59.699 "8b08b486-c4fd-450b-accf-0b6f64e7b188" 00:17:59.699 ], 00:17:59.700 "product_name": "Raid Volume", 00:17:59.700 "block_size": 4128, 00:17:59.700 
"num_blocks": 7936, 00:17:59.700 "uuid": "8b08b486-c4fd-450b-accf-0b6f64e7b188", 00:17:59.700 "md_size": 32, 00:17:59.700 "md_interleave": true, 00:17:59.700 "dif_type": 0, 00:17:59.700 "assigned_rate_limits": { 00:17:59.700 "rw_ios_per_sec": 0, 00:17:59.700 "rw_mbytes_per_sec": 0, 00:17:59.700 "r_mbytes_per_sec": 0, 00:17:59.700 "w_mbytes_per_sec": 0 00:17:59.700 }, 00:17:59.700 "claimed": false, 00:17:59.700 "zoned": false, 00:17:59.700 "supported_io_types": { 00:17:59.700 "read": true, 00:17:59.700 "write": true, 00:17:59.700 "unmap": false, 00:17:59.700 "flush": false, 00:17:59.700 "reset": true, 00:17:59.700 "nvme_admin": false, 00:17:59.700 "nvme_io": false, 00:17:59.700 "nvme_io_md": false, 00:17:59.700 "write_zeroes": true, 00:17:59.700 "zcopy": false, 00:17:59.700 "get_zone_info": false, 00:17:59.700 "zone_management": false, 00:17:59.700 "zone_append": false, 00:17:59.700 "compare": false, 00:17:59.700 "compare_and_write": false, 00:17:59.700 "abort": false, 00:17:59.700 "seek_hole": false, 00:17:59.700 "seek_data": false, 00:17:59.700 "copy": false, 00:17:59.700 "nvme_iov_md": false 00:17:59.700 }, 00:17:59.700 "memory_domains": [ 00:17:59.700 { 00:17:59.700 "dma_device_id": "system", 00:17:59.700 "dma_device_type": 1 00:17:59.700 }, 00:17:59.700 { 00:17:59.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.700 "dma_device_type": 2 00:17:59.700 }, 00:17:59.700 { 00:17:59.700 "dma_device_id": "system", 00:17:59.700 "dma_device_type": 1 00:17:59.700 }, 00:17:59.700 { 00:17:59.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.700 "dma_device_type": 2 00:17:59.700 } 00:17:59.700 ], 00:17:59.700 "driver_specific": { 00:17:59.700 "raid": { 00:17:59.700 "uuid": "8b08b486-c4fd-450b-accf-0b6f64e7b188", 00:17:59.700 "strip_size_kb": 0, 00:17:59.700 "state": "online", 00:17:59.700 "raid_level": "raid1", 00:17:59.700 "superblock": true, 00:17:59.700 "num_base_bdevs": 2, 00:17:59.700 "num_base_bdevs_discovered": 2, 00:17:59.700 "num_base_bdevs_operational": 
2, 00:17:59.700 "base_bdevs_list": [ 00:17:59.700 { 00:17:59.700 "name": "pt1", 00:17:59.700 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:59.700 "is_configured": true, 00:17:59.700 "data_offset": 256, 00:17:59.700 "data_size": 7936 00:17:59.700 }, 00:17:59.700 { 00:17:59.700 "name": "pt2", 00:17:59.700 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:59.700 "is_configured": true, 00:17:59.700 "data_offset": 256, 00:17:59.700 "data_size": 7936 00:17:59.700 } 00:17:59.700 ] 00:17:59.700 } 00:17:59.700 } 00:17:59.700 }' 00:17:59.700 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:59.700 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:59.700 pt2' 00:17:59.700 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:59.700 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:17:59.700 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:59.700 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:59.700 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:59.700 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.700 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.700 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.700 13:27:48 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:17:59.700 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:59.700 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:59.700 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:59.700 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.700 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:59.700 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.700 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.700 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:17:59.700 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:59.700 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:59.700 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:59.700 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.700 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.700 [2024-11-17 13:27:48.903080] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:59.700 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.961 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=8b08b486-c4fd-450b-accf-0b6f64e7b188 00:17:59.961 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 8b08b486-c4fd-450b-accf-0b6f64e7b188 ']' 00:17:59.961 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:59.961 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.961 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.961 [2024-11-17 13:27:48.946738] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:59.961 [2024-11-17 13:27:48.946759] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:59.961 [2024-11-17 13:27:48.946839] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:59.961 [2024-11-17 13:27:48.946902] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:59.961 [2024-11-17 13:27:48.946914] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:59.961 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.961 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.961 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.961 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:59.961 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.961 13:27:48 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.961 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:59.961 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:59.961 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:59.961 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:59.961 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.961 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.961 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.961 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:59.961 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:59.961 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.961 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.961 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.961 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:59.961 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.961 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:59.961 13:27:49 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.961 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.961 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:59.961 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:59.961 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:17:59.961 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:59.961 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:59.961 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:59.961 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:59.961 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:59.961 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:59.961 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.961 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.961 [2024-11-17 13:27:49.082525] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:59.961 [2024-11-17 13:27:49.084684] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 
00:17:59.961 [2024-11-17 13:27:49.084813] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:59.961 [2024-11-17 13:27:49.084936] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:59.961 [2024-11-17 13:27:49.084995] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:59.961 [2024-11-17 13:27:49.085034] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:59.961 request: 00:17:59.961 { 00:17:59.961 "name": "raid_bdev1", 00:17:59.961 "raid_level": "raid1", 00:17:59.961 "base_bdevs": [ 00:17:59.961 "malloc1", 00:17:59.961 "malloc2" 00:17:59.961 ], 00:17:59.961 "superblock": false, 00:17:59.961 "method": "bdev_raid_create", 00:17:59.961 "req_id": 1 00:17:59.961 } 00:17:59.961 Got JSON-RPC error response 00:17:59.961 response: 00:17:59.961 { 00:17:59.961 "code": -17, 00:17:59.961 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:59.961 } 00:17:59.961 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:59.961 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:17:59.961 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:59.961 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:59.961 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:59.961 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.962 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.962 13:27:49 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:59.962 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.962 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.962 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:59.962 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:59.962 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:59.962 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.962 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.962 [2024-11-17 13:27:49.150394] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:59.962 [2024-11-17 13:27:49.150441] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:59.962 [2024-11-17 13:27:49.150456] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:59.962 [2024-11-17 13:27:49.150467] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:59.962 [2024-11-17 13:27:49.152547] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:59.962 [2024-11-17 13:27:49.152585] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:59.962 [2024-11-17 13:27:49.152629] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:59.962 [2024-11-17 13:27:49.152698] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:59.962 pt1 00:17:59.962 13:27:49 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.962 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:59.962 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:59.962 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:59.962 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:59.962 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:59.962 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:59.962 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:59.962 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:59.962 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:59.962 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:59.962 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.962 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.962 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.962 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.962 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.222 
13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:00.222 "name": "raid_bdev1", 00:18:00.222 "uuid": "8b08b486-c4fd-450b-accf-0b6f64e7b188", 00:18:00.222 "strip_size_kb": 0, 00:18:00.222 "state": "configuring", 00:18:00.222 "raid_level": "raid1", 00:18:00.222 "superblock": true, 00:18:00.222 "num_base_bdevs": 2, 00:18:00.222 "num_base_bdevs_discovered": 1, 00:18:00.222 "num_base_bdevs_operational": 2, 00:18:00.222 "base_bdevs_list": [ 00:18:00.222 { 00:18:00.222 "name": "pt1", 00:18:00.222 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:00.222 "is_configured": true, 00:18:00.222 "data_offset": 256, 00:18:00.222 "data_size": 7936 00:18:00.222 }, 00:18:00.222 { 00:18:00.222 "name": null, 00:18:00.222 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:00.222 "is_configured": false, 00:18:00.222 "data_offset": 256, 00:18:00.222 "data_size": 7936 00:18:00.222 } 00:18:00.222 ] 00:18:00.222 }' 00:18:00.222 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:00.222 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:00.482 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:18:00.482 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:00.482 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:00.482 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:00.482 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.482 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:00.482 [2024-11-17 13:27:49.613634] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:00.482 [2024-11-17 13:27:49.613747] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:00.482 [2024-11-17 13:27:49.613791] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:00.482 [2024-11-17 13:27:49.613821] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:00.482 [2024-11-17 13:27:49.614058] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:00.482 [2024-11-17 13:27:49.614106] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:00.482 [2024-11-17 13:27:49.614197] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:00.482 [2024-11-17 13:27:49.614270] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:00.482 [2024-11-17 13:27:49.614408] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:00.482 [2024-11-17 13:27:49.614448] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:00.482 [2024-11-17 13:27:49.614564] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:00.482 [2024-11-17 13:27:49.614687] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:00.482 [2024-11-17 13:27:49.614725] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:00.482 [2024-11-17 13:27:49.614867] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:00.482 pt2 00:18:00.482 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.482 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:00.482 13:27:49 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:00.482 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:00.482 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:00.482 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:00.482 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:00.482 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:00.482 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:00.482 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:00.482 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:00.482 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:00.482 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:00.482 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.483 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.483 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.483 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:00.483 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.483 13:27:49 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:00.483 "name": "raid_bdev1", 00:18:00.483 "uuid": "8b08b486-c4fd-450b-accf-0b6f64e7b188", 00:18:00.483 "strip_size_kb": 0, 00:18:00.483 "state": "online", 00:18:00.483 "raid_level": "raid1", 00:18:00.483 "superblock": true, 00:18:00.483 "num_base_bdevs": 2, 00:18:00.483 "num_base_bdevs_discovered": 2, 00:18:00.483 "num_base_bdevs_operational": 2, 00:18:00.483 "base_bdevs_list": [ 00:18:00.483 { 00:18:00.483 "name": "pt1", 00:18:00.483 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:00.483 "is_configured": true, 00:18:00.483 "data_offset": 256, 00:18:00.483 "data_size": 7936 00:18:00.483 }, 00:18:00.483 { 00:18:00.483 "name": "pt2", 00:18:00.483 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:00.483 "is_configured": true, 00:18:00.483 "data_offset": 256, 00:18:00.483 "data_size": 7936 00:18:00.483 } 00:18:00.483 ] 00:18:00.483 }' 00:18:00.483 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:00.483 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:01.053 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:01.053 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:01.053 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:01.053 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:01.053 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:01.053 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:01.053 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:01.053 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:01.053 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.053 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:01.053 [2024-11-17 13:27:50.057119] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:01.053 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.053 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:01.053 "name": "raid_bdev1", 00:18:01.053 "aliases": [ 00:18:01.053 "8b08b486-c4fd-450b-accf-0b6f64e7b188" 00:18:01.053 ], 00:18:01.053 "product_name": "Raid Volume", 00:18:01.053 "block_size": 4128, 00:18:01.053 "num_blocks": 7936, 00:18:01.053 "uuid": "8b08b486-c4fd-450b-accf-0b6f64e7b188", 00:18:01.053 "md_size": 32, 00:18:01.053 "md_interleave": true, 00:18:01.053 "dif_type": 0, 00:18:01.053 "assigned_rate_limits": { 00:18:01.053 "rw_ios_per_sec": 0, 00:18:01.053 "rw_mbytes_per_sec": 0, 00:18:01.053 "r_mbytes_per_sec": 0, 00:18:01.053 "w_mbytes_per_sec": 0 00:18:01.053 }, 00:18:01.053 "claimed": false, 00:18:01.053 "zoned": false, 00:18:01.053 "supported_io_types": { 00:18:01.053 "read": true, 00:18:01.053 "write": true, 00:18:01.053 "unmap": false, 00:18:01.053 "flush": false, 00:18:01.053 "reset": true, 00:18:01.053 "nvme_admin": false, 00:18:01.053 "nvme_io": false, 00:18:01.053 "nvme_io_md": false, 00:18:01.053 "write_zeroes": true, 00:18:01.053 "zcopy": false, 00:18:01.053 "get_zone_info": false, 00:18:01.053 "zone_management": false, 00:18:01.053 "zone_append": false, 00:18:01.053 "compare": false, 00:18:01.053 "compare_and_write": false, 00:18:01.053 "abort": false, 00:18:01.053 "seek_hole": false, 
00:18:01.053 "seek_data": false, 00:18:01.053 "copy": false, 00:18:01.053 "nvme_iov_md": false 00:18:01.053 }, 00:18:01.053 "memory_domains": [ 00:18:01.053 { 00:18:01.053 "dma_device_id": "system", 00:18:01.053 "dma_device_type": 1 00:18:01.053 }, 00:18:01.053 { 00:18:01.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:01.053 "dma_device_type": 2 00:18:01.053 }, 00:18:01.053 { 00:18:01.053 "dma_device_id": "system", 00:18:01.053 "dma_device_type": 1 00:18:01.053 }, 00:18:01.053 { 00:18:01.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:01.053 "dma_device_type": 2 00:18:01.053 } 00:18:01.053 ], 00:18:01.053 "driver_specific": { 00:18:01.053 "raid": { 00:18:01.053 "uuid": "8b08b486-c4fd-450b-accf-0b6f64e7b188", 00:18:01.053 "strip_size_kb": 0, 00:18:01.053 "state": "online", 00:18:01.053 "raid_level": "raid1", 00:18:01.053 "superblock": true, 00:18:01.053 "num_base_bdevs": 2, 00:18:01.053 "num_base_bdevs_discovered": 2, 00:18:01.053 "num_base_bdevs_operational": 2, 00:18:01.053 "base_bdevs_list": [ 00:18:01.053 { 00:18:01.053 "name": "pt1", 00:18:01.053 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:01.053 "is_configured": true, 00:18:01.053 "data_offset": 256, 00:18:01.053 "data_size": 7936 00:18:01.053 }, 00:18:01.053 { 00:18:01.053 "name": "pt2", 00:18:01.053 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:01.053 "is_configured": true, 00:18:01.053 "data_offset": 256, 00:18:01.053 "data_size": 7936 00:18:01.053 } 00:18:01.053 ] 00:18:01.053 } 00:18:01.053 } 00:18:01.053 }' 00:18:01.053 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:01.053 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:01.053 pt2' 00:18:01.053 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:18:01.053 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:01.053 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:01.053 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:01.053 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.053 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:01.053 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:01.053 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.053 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:01.053 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:01.053 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:01.053 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:01.053 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:01.053 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.053 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:01.053 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.053 
13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:01.053 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:01.314 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:01.314 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.314 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:01.314 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:01.314 [2024-11-17 13:27:50.288718] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:01.314 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.314 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 8b08b486-c4fd-450b-accf-0b6f64e7b188 '!=' 8b08b486-c4fd-450b-accf-0b6f64e7b188 ']' 00:18:01.314 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:01.314 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:01.314 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:18:01.314 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:01.315 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.315 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:01.315 [2024-11-17 13:27:50.332433] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:01.315 13:27:50 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.315 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:01.315 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:01.315 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:01.315 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:01.315 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:01.315 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:01.315 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:01.315 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:01.315 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:01.315 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:01.315 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.315 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.315 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:01.315 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.315 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.315 13:27:50 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:01.315 "name": "raid_bdev1", 00:18:01.315 "uuid": "8b08b486-c4fd-450b-accf-0b6f64e7b188", 00:18:01.315 "strip_size_kb": 0, 00:18:01.315 "state": "online", 00:18:01.315 "raid_level": "raid1", 00:18:01.315 "superblock": true, 00:18:01.315 "num_base_bdevs": 2, 00:18:01.315 "num_base_bdevs_discovered": 1, 00:18:01.315 "num_base_bdevs_operational": 1, 00:18:01.315 "base_bdevs_list": [ 00:18:01.315 { 00:18:01.315 "name": null, 00:18:01.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.315 "is_configured": false, 00:18:01.315 "data_offset": 0, 00:18:01.315 "data_size": 7936 00:18:01.315 }, 00:18:01.315 { 00:18:01.315 "name": "pt2", 00:18:01.315 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:01.315 "is_configured": true, 00:18:01.315 "data_offset": 256, 00:18:01.315 "data_size": 7936 00:18:01.315 } 00:18:01.315 ] 00:18:01.315 }' 00:18:01.315 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:01.315 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:01.575 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:01.575 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.575 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:01.575 [2024-11-17 13:27:50.775616] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:01.575 [2024-11-17 13:27:50.775680] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:01.575 [2024-11-17 13:27:50.775765] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:01.575 [2024-11-17 13:27:50.775870] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:01.575 [2024-11-17 13:27:50.775925] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:01.575 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.575 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.575 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:01.575 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.575 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:01.575 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.835 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:01.836 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:01.836 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:01.836 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:01.836 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:01.836 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.836 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:01.836 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.836 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:01.836 13:27:50 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:01.836 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:01.836 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:01.836 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:18:01.836 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:01.836 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.836 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:01.836 [2024-11-17 13:27:50.847503] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:01.836 [2024-11-17 13:27:50.847553] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:01.836 [2024-11-17 13:27:50.847569] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:01.836 [2024-11-17 13:27:50.847580] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:01.836 [2024-11-17 13:27:50.849971] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:01.836 [2024-11-17 13:27:50.850011] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:01.836 [2024-11-17 13:27:50.850059] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:01.836 [2024-11-17 13:27:50.850115] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:01.836 [2024-11-17 13:27:50.850180] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:01.836 [2024-11-17 13:27:50.850191] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:01.836 [2024-11-17 13:27:50.850303] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:01.836 [2024-11-17 13:27:50.850371] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:01.836 [2024-11-17 13:27:50.850379] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:01.836 [2024-11-17 13:27:50.850463] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:01.836 pt2 00:18:01.836 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.836 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:01.836 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:01.836 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:01.836 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:01.836 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:01.836 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:01.836 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:01.836 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:01.836 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:01.836 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 
00:18:01.836 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.836 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.836 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.836 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:01.836 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.836 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:01.836 "name": "raid_bdev1", 00:18:01.836 "uuid": "8b08b486-c4fd-450b-accf-0b6f64e7b188", 00:18:01.836 "strip_size_kb": 0, 00:18:01.836 "state": "online", 00:18:01.836 "raid_level": "raid1", 00:18:01.836 "superblock": true, 00:18:01.836 "num_base_bdevs": 2, 00:18:01.836 "num_base_bdevs_discovered": 1, 00:18:01.836 "num_base_bdevs_operational": 1, 00:18:01.836 "base_bdevs_list": [ 00:18:01.836 { 00:18:01.836 "name": null, 00:18:01.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.836 "is_configured": false, 00:18:01.836 "data_offset": 256, 00:18:01.836 "data_size": 7936 00:18:01.836 }, 00:18:01.836 { 00:18:01.836 "name": "pt2", 00:18:01.836 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:01.836 "is_configured": true, 00:18:01.836 "data_offset": 256, 00:18:01.836 "data_size": 7936 00:18:01.836 } 00:18:01.836 ] 00:18:01.836 }' 00:18:01.836 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:01.836 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:02.096 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:02.096 13:27:51 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.096 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:02.096 [2024-11-17 13:27:51.294705] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:02.096 [2024-11-17 13:27:51.294730] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:02.096 [2024-11-17 13:27:51.294784] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:02.096 [2024-11-17 13:27:51.294827] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:02.096 [2024-11-17 13:27:51.294836] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:02.096 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.096 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:02.096 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.096 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.096 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:02.096 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.357 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:02.357 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:02.357 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:02.357 13:27:51 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:02.357 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.357 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:02.357 [2024-11-17 13:27:51.338671] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:02.357 [2024-11-17 13:27:51.338766] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:02.357 [2024-11-17 13:27:51.338804] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:02.357 [2024-11-17 13:27:51.338831] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:02.357 [2024-11-17 13:27:51.341028] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:02.357 [2024-11-17 13:27:51.341095] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:02.357 [2024-11-17 13:27:51.341161] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:02.357 [2024-11-17 13:27:51.341236] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:02.357 [2024-11-17 13:27:51.341362] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:02.357 [2024-11-17 13:27:51.341420] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:02.357 [2024-11-17 13:27:51.341458] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:02.357 [2024-11-17 13:27:51.341585] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:02.357 [2024-11-17 13:27:51.341695] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000008900 00:18:02.357 pt1 00:18:02.357 [2024-11-17 13:27:51.341732] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:02.357 [2024-11-17 13:27:51.341808] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:02.357 [2024-11-17 13:27:51.341879] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:02.357 [2024-11-17 13:27:51.341891] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:02.357 [2024-11-17 13:27:51.341960] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:02.357 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.357 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:18:02.357 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:02.357 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:02.357 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:02.357 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:02.357 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:02.357 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:02.357 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.357 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.358 13:27:51 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.358 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.358 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.358 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.358 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.358 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:02.358 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.358 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.358 "name": "raid_bdev1", 00:18:02.358 "uuid": "8b08b486-c4fd-450b-accf-0b6f64e7b188", 00:18:02.358 "strip_size_kb": 0, 00:18:02.358 "state": "online", 00:18:02.358 "raid_level": "raid1", 00:18:02.358 "superblock": true, 00:18:02.358 "num_base_bdevs": 2, 00:18:02.358 "num_base_bdevs_discovered": 1, 00:18:02.358 "num_base_bdevs_operational": 1, 00:18:02.358 "base_bdevs_list": [ 00:18:02.358 { 00:18:02.358 "name": null, 00:18:02.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.358 "is_configured": false, 00:18:02.358 "data_offset": 256, 00:18:02.358 "data_size": 7936 00:18:02.358 }, 00:18:02.358 { 00:18:02.358 "name": "pt2", 00:18:02.358 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:02.358 "is_configured": true, 00:18:02.358 "data_offset": 256, 00:18:02.358 "data_size": 7936 00:18:02.358 } 00:18:02.358 ] 00:18:02.358 }' 00:18:02.358 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.358 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:18:02.618 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:02.618 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.618 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:02.618 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:02.618 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.618 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:02.618 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:02.618 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:02.618 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.618 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:02.618 [2024-11-17 13:27:51.782157] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:02.618 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.618 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 8b08b486-c4fd-450b-accf-0b6f64e7b188 '!=' 8b08b486-c4fd-450b-accf-0b6f64e7b188 ']' 00:18:02.618 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 88566 00:18:02.618 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88566 ']' 00:18:02.618 13:27:51 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88566 00:18:02.618 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:18:02.618 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:02.618 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88566 00:18:02.879 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:02.879 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:02.879 killing process with pid 88566 00:18:02.879 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88566' 00:18:02.879 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 88566 00:18:02.879 [2024-11-17 13:27:51.848252] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:02.879 [2024-11-17 13:27:51.848334] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:02.879 [2024-11-17 13:27:51.848380] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:02.879 [2024-11-17 13:27:51.848395] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:02.879 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 88566 00:18:02.879 [2024-11-17 13:27:52.062000] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:04.259 13:27:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:18:04.259 00:18:04.259 real 0m6.072s 00:18:04.259 user 0m8.991s 00:18:04.259 sys 0m1.259s 00:18:04.259 
13:27:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:04.259 ************************************ 00:18:04.259 END TEST raid_superblock_test_md_interleaved 00:18:04.259 ************************************ 00:18:04.259 13:27:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:04.259 13:27:53 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:18:04.259 13:27:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:04.259 13:27:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:04.259 13:27:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:04.259 ************************************ 00:18:04.259 START TEST raid_rebuild_test_sb_md_interleaved 00:18:04.259 ************************************ 00:18:04.259 13:27:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:18:04.259 13:27:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:04.259 13:27:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:04.259 13:27:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:04.259 13:27:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:04.259 13:27:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:18:04.259 13:27:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:04.259 13:27:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:04.259 13:27:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:04.259 13:27:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:04.259 13:27:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:04.259 13:27:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:04.259 13:27:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:04.259 13:27:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:04.259 13:27:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:04.259 13:27:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:04.259 13:27:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:04.259 13:27:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:04.259 13:27:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:04.259 13:27:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:04.259 13:27:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:04.259 13:27:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:04.260 13:27:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:04.260 13:27:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:04.260 13:27:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:04.260 13:27:53 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@597 -- # raid_pid=88889 00:18:04.260 13:27:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:04.260 13:27:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 88889 00:18:04.260 13:27:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88889 ']' 00:18:04.260 13:27:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:04.260 13:27:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:04.260 13:27:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:04.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:04.260 13:27:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:04.260 13:27:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:04.260 [2024-11-17 13:27:53.412336] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:18:04.260 [2024-11-17 13:27:53.412565] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88889 ] 00:18:04.260 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:04.260 Zero copy mechanism will not be used. 
00:18:04.519 [2024-11-17 13:27:53.593163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.519 [2024-11-17 13:27:53.726354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:04.782 [2024-11-17 13:27:53.947159] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:04.782 [2024-11-17 13:27:53.947293] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:05.053 13:27:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:05.053 13:27:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:18:05.053 13:27:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:05.053 13:27:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:18:05.053 13:27:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.053 13:27:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:05.341 BaseBdev1_malloc 00:18:05.341 13:27:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.341 13:27:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:05.341 13:27:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.341 13:27:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:05.341 [2024-11-17 13:27:54.285738] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:05.341 [2024-11-17 13:27:54.285804] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:05.341 
[2024-11-17 13:27:54.285828] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:05.341 [2024-11-17 13:27:54.285840] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:05.341 [2024-11-17 13:27:54.288015] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:05.341 [2024-11-17 13:27:54.288059] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:05.341 BaseBdev1 00:18:05.341 13:27:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.341 13:27:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:05.341 13:27:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:18:05.341 13:27:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.341 13:27:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:05.341 BaseBdev2_malloc 00:18:05.341 13:27:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.341 13:27:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:05.341 13:27:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.341 13:27:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:05.341 [2024-11-17 13:27:54.347171] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:05.341 [2024-11-17 13:27:54.347322] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:05.341 [2024-11-17 13:27:54.347351] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:05.341 [2024-11-17 13:27:54.347366] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:05.341 [2024-11-17 13:27:54.349530] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:05.341 [2024-11-17 13:27:54.349566] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:05.341 BaseBdev2 00:18:05.341 13:27:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.341 13:27:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:18:05.341 13:27:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.341 13:27:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:05.341 spare_malloc 00:18:05.341 13:27:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.341 13:27:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:05.341 13:27:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.341 13:27:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:05.341 spare_delay 00:18:05.341 13:27:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.341 13:27:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:05.341 13:27:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.341 13:27:54 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:05.341 [2024-11-17 13:27:54.453776] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:05.341 [2024-11-17 13:27:54.453833] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:05.341 [2024-11-17 13:27:54.453854] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:05.341 [2024-11-17 13:27:54.453865] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:05.341 [2024-11-17 13:27:54.456043] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:05.341 [2024-11-17 13:27:54.456084] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:05.341 spare 00:18:05.341 13:27:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.341 13:27:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:05.341 13:27:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.341 13:27:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:05.341 [2024-11-17 13:27:54.465800] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:05.341 [2024-11-17 13:27:54.467916] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:05.341 [2024-11-17 13:27:54.468262] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:05.341 [2024-11-17 13:27:54.468283] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:05.341 [2024-11-17 13:27:54.468368] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:18:05.341 [2024-11-17 13:27:54.468446] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:05.341 [2024-11-17 13:27:54.468454] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:05.341 [2024-11-17 13:27:54.468524] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:05.341 13:27:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.341 13:27:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:05.341 13:27:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:05.341 13:27:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:05.341 13:27:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:05.341 13:27:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:05.341 13:27:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:05.341 13:27:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:05.341 13:27:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:05.341 13:27:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:05.341 13:27:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:05.341 13:27:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.341 13:27:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.341 13:27:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.341 13:27:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:05.341 13:27:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.341 13:27:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:05.341 "name": "raid_bdev1", 00:18:05.341 "uuid": "a52b7208-41ce-477d-8264-ed2653626238", 00:18:05.341 "strip_size_kb": 0, 00:18:05.341 "state": "online", 00:18:05.341 "raid_level": "raid1", 00:18:05.341 "superblock": true, 00:18:05.341 "num_base_bdevs": 2, 00:18:05.341 "num_base_bdevs_discovered": 2, 00:18:05.341 "num_base_bdevs_operational": 2, 00:18:05.341 "base_bdevs_list": [ 00:18:05.341 { 00:18:05.341 "name": "BaseBdev1", 00:18:05.341 "uuid": "d876b074-dd5e-5736-bd8b-041d100f076f", 00:18:05.341 "is_configured": true, 00:18:05.341 "data_offset": 256, 00:18:05.341 "data_size": 7936 00:18:05.341 }, 00:18:05.341 { 00:18:05.341 "name": "BaseBdev2", 00:18:05.341 "uuid": "09b2349b-c896-5166-91e6-d673b95fbe7e", 00:18:05.341 "is_configured": true, 00:18:05.341 "data_offset": 256, 00:18:05.341 "data_size": 7936 00:18:05.341 } 00:18:05.341 ] 00:18:05.341 }' 00:18:05.341 13:27:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:05.341 13:27:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:05.925 13:27:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:05.925 13:27:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:05.925 13:27:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.925 
13:27:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:05.925 [2024-11-17 13:27:54.973279] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:05.925 13:27:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.925 13:27:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:18:05.925 13:27:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.925 13:27:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:05.925 13:27:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.925 13:27:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:05.925 13:27:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.925 13:27:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:05.925 13:27:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:05.925 13:27:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:18:05.925 13:27:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:05.925 13:27:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.925 13:27:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:05.925 [2024-11-17 13:27:55.068803] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:05.925 13:27:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.925 13:27:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:05.925 13:27:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:05.925 13:27:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:05.925 13:27:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:05.925 13:27:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:05.925 13:27:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:05.925 13:27:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:05.925 13:27:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:05.925 13:27:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:05.925 13:27:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:05.925 13:27:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.925 13:27:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.925 13:27:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.925 13:27:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:05.925 13:27:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.925 13:27:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:05.925 "name": "raid_bdev1", 00:18:05.925 "uuid": "a52b7208-41ce-477d-8264-ed2653626238", 00:18:05.925 "strip_size_kb": 0, 00:18:05.925 "state": "online", 00:18:05.925 "raid_level": "raid1", 00:18:05.925 "superblock": true, 00:18:05.925 "num_base_bdevs": 2, 00:18:05.925 "num_base_bdevs_discovered": 1, 00:18:05.925 "num_base_bdevs_operational": 1, 00:18:05.925 "base_bdevs_list": [ 00:18:05.925 { 00:18:05.925 "name": null, 00:18:05.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.926 "is_configured": false, 00:18:05.926 "data_offset": 0, 00:18:05.926 "data_size": 7936 00:18:05.926 }, 00:18:05.926 { 00:18:05.926 "name": "BaseBdev2", 00:18:05.926 "uuid": "09b2349b-c896-5166-91e6-d673b95fbe7e", 00:18:05.926 "is_configured": true, 00:18:05.926 "data_offset": 256, 00:18:05.926 "data_size": 7936 00:18:05.926 } 00:18:05.926 ] 00:18:05.926 }' 00:18:05.926 13:27:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:05.926 13:27:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:06.495 13:27:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:06.495 13:27:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.495 13:27:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:06.495 [2024-11-17 13:27:55.480113] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:06.495 [2024-11-17 13:27:55.498288] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:06.495 13:27:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.495 13:27:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:06.495 
[2024-11-17 13:27:55.500489] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:07.435 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:07.435 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:07.435 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:07.435 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:07.435 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:07.435 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.435 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.435 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.435 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:07.435 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.435 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:07.435 "name": "raid_bdev1", 00:18:07.435 "uuid": "a52b7208-41ce-477d-8264-ed2653626238", 00:18:07.435 "strip_size_kb": 0, 00:18:07.435 "state": "online", 00:18:07.435 "raid_level": "raid1", 00:18:07.436 "superblock": true, 00:18:07.436 "num_base_bdevs": 2, 00:18:07.436 "num_base_bdevs_discovered": 2, 00:18:07.436 "num_base_bdevs_operational": 2, 00:18:07.436 "process": { 00:18:07.436 "type": "rebuild", 00:18:07.436 "target": "spare", 00:18:07.436 "progress": { 00:18:07.436 
"blocks": 2560, 00:18:07.436 "percent": 32 00:18:07.436 } 00:18:07.436 }, 00:18:07.436 "base_bdevs_list": [ 00:18:07.436 { 00:18:07.436 "name": "spare", 00:18:07.436 "uuid": "d7a00970-9614-55a5-b732-34f3878a8300", 00:18:07.436 "is_configured": true, 00:18:07.436 "data_offset": 256, 00:18:07.436 "data_size": 7936 00:18:07.436 }, 00:18:07.436 { 00:18:07.436 "name": "BaseBdev2", 00:18:07.436 "uuid": "09b2349b-c896-5166-91e6-d673b95fbe7e", 00:18:07.436 "is_configured": true, 00:18:07.436 "data_offset": 256, 00:18:07.436 "data_size": 7936 00:18:07.436 } 00:18:07.436 ] 00:18:07.436 }' 00:18:07.436 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:07.436 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:07.436 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:07.696 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:07.696 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:07.696 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.696 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:07.696 [2024-11-17 13:27:56.671771] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:07.696 [2024-11-17 13:27:56.709447] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:07.696 [2024-11-17 13:27:56.709568] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:07.696 [2024-11-17 13:27:56.709604] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:07.696 [2024-11-17 13:27:56.709634] 
bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:07.696 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.696 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:07.696 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:07.696 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:07.696 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:07.696 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:07.696 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:07.696 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:07.696 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:07.696 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:07.696 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:07.696 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.696 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.696 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:07.696 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:18:07.696 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.696 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:07.696 "name": "raid_bdev1", 00:18:07.696 "uuid": "a52b7208-41ce-477d-8264-ed2653626238", 00:18:07.696 "strip_size_kb": 0, 00:18:07.696 "state": "online", 00:18:07.696 "raid_level": "raid1", 00:18:07.696 "superblock": true, 00:18:07.696 "num_base_bdevs": 2, 00:18:07.696 "num_base_bdevs_discovered": 1, 00:18:07.696 "num_base_bdevs_operational": 1, 00:18:07.696 "base_bdevs_list": [ 00:18:07.696 { 00:18:07.696 "name": null, 00:18:07.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.696 "is_configured": false, 00:18:07.696 "data_offset": 0, 00:18:07.696 "data_size": 7936 00:18:07.696 }, 00:18:07.696 { 00:18:07.696 "name": "BaseBdev2", 00:18:07.696 "uuid": "09b2349b-c896-5166-91e6-d673b95fbe7e", 00:18:07.696 "is_configured": true, 00:18:07.696 "data_offset": 256, 00:18:07.696 "data_size": 7936 00:18:07.696 } 00:18:07.696 ] 00:18:07.696 }' 00:18:07.696 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:07.696 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:08.267 13:27:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:08.267 13:27:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:08.267 13:27:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:08.267 13:27:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:08.267 13:27:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:08.267 13:27:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.267 13:27:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.267 13:27:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:08.267 13:27:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.267 13:27:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.267 13:27:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:08.267 "name": "raid_bdev1", 00:18:08.267 "uuid": "a52b7208-41ce-477d-8264-ed2653626238", 00:18:08.267 "strip_size_kb": 0, 00:18:08.267 "state": "online", 00:18:08.267 "raid_level": "raid1", 00:18:08.267 "superblock": true, 00:18:08.267 "num_base_bdevs": 2, 00:18:08.267 "num_base_bdevs_discovered": 1, 00:18:08.267 "num_base_bdevs_operational": 1, 00:18:08.267 "base_bdevs_list": [ 00:18:08.267 { 00:18:08.267 "name": null, 00:18:08.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.267 "is_configured": false, 00:18:08.267 "data_offset": 0, 00:18:08.267 "data_size": 7936 00:18:08.267 }, 00:18:08.267 { 00:18:08.267 "name": "BaseBdev2", 00:18:08.267 "uuid": "09b2349b-c896-5166-91e6-d673b95fbe7e", 00:18:08.267 "is_configured": true, 00:18:08.267 "data_offset": 256, 00:18:08.267 "data_size": 7936 00:18:08.267 } 00:18:08.267 ] 00:18:08.267 }' 00:18:08.267 13:27:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:08.267 13:27:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:08.267 13:27:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:08.267 13:27:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:08.267 13:27:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:08.267 13:27:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.267 13:27:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:08.267 [2024-11-17 13:27:57.362106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:08.267 [2024-11-17 13:27:57.379008] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:08.267 13:27:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.267 13:27:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:08.267 [2024-11-17 13:27:57.381183] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:09.208 13:27:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:09.208 13:27:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:09.208 13:27:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:09.208 13:27:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:09.208 13:27:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:09.208 13:27:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.208 13:27:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.208 
13:27:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.208 13:27:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:09.208 13:27:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.468 13:27:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:09.468 "name": "raid_bdev1", 00:18:09.468 "uuid": "a52b7208-41ce-477d-8264-ed2653626238", 00:18:09.468 "strip_size_kb": 0, 00:18:09.468 "state": "online", 00:18:09.468 "raid_level": "raid1", 00:18:09.468 "superblock": true, 00:18:09.468 "num_base_bdevs": 2, 00:18:09.468 "num_base_bdevs_discovered": 2, 00:18:09.468 "num_base_bdevs_operational": 2, 00:18:09.468 "process": { 00:18:09.468 "type": "rebuild", 00:18:09.468 "target": "spare", 00:18:09.468 "progress": { 00:18:09.468 "blocks": 2560, 00:18:09.468 "percent": 32 00:18:09.468 } 00:18:09.468 }, 00:18:09.468 "base_bdevs_list": [ 00:18:09.468 { 00:18:09.468 "name": "spare", 00:18:09.468 "uuid": "d7a00970-9614-55a5-b732-34f3878a8300", 00:18:09.468 "is_configured": true, 00:18:09.468 "data_offset": 256, 00:18:09.468 "data_size": 7936 00:18:09.468 }, 00:18:09.468 { 00:18:09.468 "name": "BaseBdev2", 00:18:09.468 "uuid": "09b2349b-c896-5166-91e6-d673b95fbe7e", 00:18:09.468 "is_configured": true, 00:18:09.468 "data_offset": 256, 00:18:09.468 "data_size": 7936 00:18:09.468 } 00:18:09.468 ] 00:18:09.468 }' 00:18:09.468 13:27:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:09.468 13:27:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:09.468 13:27:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:09.468 13:27:58 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:09.468 13:27:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:09.468 13:27:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:09.468 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:09.468 13:27:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:09.468 13:27:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:09.468 13:27:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:09.468 13:27:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=728 00:18:09.468 13:27:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:09.468 13:27:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:09.468 13:27:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:09.468 13:27:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:09.468 13:27:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:09.469 13:27:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:09.469 13:27:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.469 13:27:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.469 13:27:58 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.469 13:27:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:09.469 13:27:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.469 13:27:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:09.469 "name": "raid_bdev1", 00:18:09.469 "uuid": "a52b7208-41ce-477d-8264-ed2653626238", 00:18:09.469 "strip_size_kb": 0, 00:18:09.469 "state": "online", 00:18:09.469 "raid_level": "raid1", 00:18:09.469 "superblock": true, 00:18:09.469 "num_base_bdevs": 2, 00:18:09.469 "num_base_bdevs_discovered": 2, 00:18:09.469 "num_base_bdevs_operational": 2, 00:18:09.469 "process": { 00:18:09.469 "type": "rebuild", 00:18:09.469 "target": "spare", 00:18:09.469 "progress": { 00:18:09.469 "blocks": 2816, 00:18:09.469 "percent": 35 00:18:09.469 } 00:18:09.469 }, 00:18:09.469 "base_bdevs_list": [ 00:18:09.469 { 00:18:09.469 "name": "spare", 00:18:09.469 "uuid": "d7a00970-9614-55a5-b732-34f3878a8300", 00:18:09.469 "is_configured": true, 00:18:09.469 "data_offset": 256, 00:18:09.469 "data_size": 7936 00:18:09.469 }, 00:18:09.469 { 00:18:09.469 "name": "BaseBdev2", 00:18:09.469 "uuid": "09b2349b-c896-5166-91e6-d673b95fbe7e", 00:18:09.469 "is_configured": true, 00:18:09.469 "data_offset": 256, 00:18:09.469 "data_size": 7936 00:18:09.469 } 00:18:09.469 ] 00:18:09.469 }' 00:18:09.469 13:27:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:09.469 13:27:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:09.469 13:27:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:09.469 13:27:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:09.469 13:27:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:10.852 13:27:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:10.852 13:27:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:10.852 13:27:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:10.852 13:27:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:10.852 13:27:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:10.852 13:27:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:10.852 13:27:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.852 13:27:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.852 13:27:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.852 13:27:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:10.852 13:27:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.852 13:27:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:10.852 "name": "raid_bdev1", 00:18:10.852 "uuid": "a52b7208-41ce-477d-8264-ed2653626238", 00:18:10.852 "strip_size_kb": 0, 00:18:10.852 "state": "online", 00:18:10.852 "raid_level": "raid1", 00:18:10.852 "superblock": true, 00:18:10.852 "num_base_bdevs": 2, 00:18:10.852 "num_base_bdevs_discovered": 2, 00:18:10.852 
"num_base_bdevs_operational": 2, 00:18:10.852 "process": { 00:18:10.852 "type": "rebuild", 00:18:10.852 "target": "spare", 00:18:10.852 "progress": { 00:18:10.852 "blocks": 5632, 00:18:10.852 "percent": 70 00:18:10.852 } 00:18:10.852 }, 00:18:10.852 "base_bdevs_list": [ 00:18:10.852 { 00:18:10.852 "name": "spare", 00:18:10.852 "uuid": "d7a00970-9614-55a5-b732-34f3878a8300", 00:18:10.852 "is_configured": true, 00:18:10.852 "data_offset": 256, 00:18:10.852 "data_size": 7936 00:18:10.852 }, 00:18:10.852 { 00:18:10.852 "name": "BaseBdev2", 00:18:10.852 "uuid": "09b2349b-c896-5166-91e6-d673b95fbe7e", 00:18:10.852 "is_configured": true, 00:18:10.852 "data_offset": 256, 00:18:10.852 "data_size": 7936 00:18:10.852 } 00:18:10.852 ] 00:18:10.852 }' 00:18:10.852 13:27:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:10.852 13:27:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:10.852 13:27:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:10.852 13:27:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:10.852 13:27:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:11.423 [2024-11-17 13:28:00.503314] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:11.423 [2024-11-17 13:28:00.503437] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:11.423 [2024-11-17 13:28:00.503606] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:11.682 13:28:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:11.683 13:28:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:18:11.683 13:28:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:11.683 13:28:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:11.683 13:28:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:11.683 13:28:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:11.683 13:28:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.683 13:28:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.683 13:28:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.683 13:28:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:11.683 13:28:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.683 13:28:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:11.683 "name": "raid_bdev1", 00:18:11.683 "uuid": "a52b7208-41ce-477d-8264-ed2653626238", 00:18:11.683 "strip_size_kb": 0, 00:18:11.683 "state": "online", 00:18:11.683 "raid_level": "raid1", 00:18:11.683 "superblock": true, 00:18:11.683 "num_base_bdevs": 2, 00:18:11.683 "num_base_bdevs_discovered": 2, 00:18:11.683 "num_base_bdevs_operational": 2, 00:18:11.683 "base_bdevs_list": [ 00:18:11.683 { 00:18:11.683 "name": "spare", 00:18:11.683 "uuid": "d7a00970-9614-55a5-b732-34f3878a8300", 00:18:11.683 "is_configured": true, 00:18:11.683 "data_offset": 256, 00:18:11.683 "data_size": 7936 00:18:11.683 }, 00:18:11.683 { 00:18:11.683 "name": "BaseBdev2", 00:18:11.683 "uuid": "09b2349b-c896-5166-91e6-d673b95fbe7e", 00:18:11.683 
"is_configured": true, 00:18:11.683 "data_offset": 256, 00:18:11.683 "data_size": 7936 00:18:11.683 } 00:18:11.683 ] 00:18:11.683 }' 00:18:11.683 13:28:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:11.943 13:28:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:11.943 13:28:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:11.943 13:28:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:11.943 13:28:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:18:11.943 13:28:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:11.943 13:28:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:11.943 13:28:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:11.943 13:28:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:11.943 13:28:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:11.943 13:28:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.943 13:28:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.943 13:28:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:11.943 13:28:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.943 13:28:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:18:11.943 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:11.943 "name": "raid_bdev1", 00:18:11.943 "uuid": "a52b7208-41ce-477d-8264-ed2653626238", 00:18:11.943 "strip_size_kb": 0, 00:18:11.943 "state": "online", 00:18:11.943 "raid_level": "raid1", 00:18:11.943 "superblock": true, 00:18:11.943 "num_base_bdevs": 2, 00:18:11.943 "num_base_bdevs_discovered": 2, 00:18:11.943 "num_base_bdevs_operational": 2, 00:18:11.943 "base_bdevs_list": [ 00:18:11.943 { 00:18:11.943 "name": "spare", 00:18:11.943 "uuid": "d7a00970-9614-55a5-b732-34f3878a8300", 00:18:11.943 "is_configured": true, 00:18:11.943 "data_offset": 256, 00:18:11.943 "data_size": 7936 00:18:11.943 }, 00:18:11.943 { 00:18:11.943 "name": "BaseBdev2", 00:18:11.943 "uuid": "09b2349b-c896-5166-91e6-d673b95fbe7e", 00:18:11.943 "is_configured": true, 00:18:11.943 "data_offset": 256, 00:18:11.943 "data_size": 7936 00:18:11.943 } 00:18:11.943 ] 00:18:11.943 }' 00:18:11.943 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:11.943 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:11.943 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:11.943 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:11.943 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:11.943 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:11.943 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:11.943 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid1 00:18:11.943 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:11.943 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:11.943 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:11.943 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:11.943 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:11.943 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:11.943 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.943 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.943 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:11.943 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.943 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.943 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:11.943 "name": "raid_bdev1", 00:18:11.943 "uuid": "a52b7208-41ce-477d-8264-ed2653626238", 00:18:11.943 "strip_size_kb": 0, 00:18:11.943 "state": "online", 00:18:11.943 "raid_level": "raid1", 00:18:11.943 "superblock": true, 00:18:11.943 "num_base_bdevs": 2, 00:18:11.943 "num_base_bdevs_discovered": 2, 00:18:11.943 "num_base_bdevs_operational": 2, 00:18:11.943 "base_bdevs_list": [ 00:18:11.943 { 00:18:11.943 "name": "spare", 00:18:11.943 "uuid": "d7a00970-9614-55a5-b732-34f3878a8300", 00:18:11.943 
"is_configured": true, 00:18:11.943 "data_offset": 256, 00:18:11.944 "data_size": 7936 00:18:11.944 }, 00:18:11.944 { 00:18:11.944 "name": "BaseBdev2", 00:18:11.944 "uuid": "09b2349b-c896-5166-91e6-d673b95fbe7e", 00:18:11.944 "is_configured": true, 00:18:11.944 "data_offset": 256, 00:18:11.944 "data_size": 7936 00:18:11.944 } 00:18:11.944 ] 00:18:11.944 }' 00:18:11.944 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:11.944 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:12.514 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:12.514 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.514 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:12.514 [2024-11-17 13:28:01.561548] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:12.514 [2024-11-17 13:28:01.561626] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:12.514 [2024-11-17 13:28:01.561736] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:12.514 [2024-11-17 13:28:01.561808] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:12.514 [2024-11-17 13:28:01.561821] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:12.514 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.514 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.514 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:18:12.514 
13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.514 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:12.514 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.514 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:12.514 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:18:12.514 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:12.514 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:12.514 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.514 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:12.514 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.514 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:12.514 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.514 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:12.514 [2024-11-17 13:28:01.633422] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:12.514 [2024-11-17 13:28:01.633477] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:12.514 [2024-11-17 13:28:01.633501] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:18:12.514 [2024-11-17 13:28:01.633511] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:12.514 [2024-11-17 13:28:01.635787] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:12.514 [2024-11-17 13:28:01.635823] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:12.514 [2024-11-17 13:28:01.635879] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:12.514 [2024-11-17 13:28:01.635942] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:12.514 [2024-11-17 13:28:01.636061] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:12.514 spare 00:18:12.514 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.514 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:12.514 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.514 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:12.515 [2024-11-17 13:28:01.735966] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:12.515 [2024-11-17 13:28:01.736035] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:12.515 [2024-11-17 13:28:01.736151] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:12.515 [2024-11-17 13:28:01.736290] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:12.515 [2024-11-17 13:28:01.736301] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:12.515 [2024-11-17 13:28:01.736384] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:12.774 13:28:01 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.774 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:12.774 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:12.774 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:12.774 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:12.774 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:12.774 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:12.774 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:12.774 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:12.774 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:12.774 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:12.774 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.774 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.774 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.774 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:12.774 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.774 13:28:01 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:12.774 "name": "raid_bdev1", 00:18:12.774 "uuid": "a52b7208-41ce-477d-8264-ed2653626238", 00:18:12.774 "strip_size_kb": 0, 00:18:12.774 "state": "online", 00:18:12.774 "raid_level": "raid1", 00:18:12.774 "superblock": true, 00:18:12.774 "num_base_bdevs": 2, 00:18:12.774 "num_base_bdevs_discovered": 2, 00:18:12.774 "num_base_bdevs_operational": 2, 00:18:12.774 "base_bdevs_list": [ 00:18:12.774 { 00:18:12.774 "name": "spare", 00:18:12.774 "uuid": "d7a00970-9614-55a5-b732-34f3878a8300", 00:18:12.774 "is_configured": true, 00:18:12.774 "data_offset": 256, 00:18:12.774 "data_size": 7936 00:18:12.774 }, 00:18:12.774 { 00:18:12.774 "name": "BaseBdev2", 00:18:12.774 "uuid": "09b2349b-c896-5166-91e6-d673b95fbe7e", 00:18:12.774 "is_configured": true, 00:18:12.774 "data_offset": 256, 00:18:12.774 "data_size": 7936 00:18:12.774 } 00:18:12.774 ] 00:18:12.774 }' 00:18:12.774 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:12.774 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:13.034 13:28:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:13.034 13:28:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:13.034 13:28:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:13.034 13:28:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:13.034 13:28:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:13.034 13:28:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.034 13:28:02 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.034 13:28:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.034 13:28:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:13.034 13:28:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.034 13:28:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:13.034 "name": "raid_bdev1", 00:18:13.034 "uuid": "a52b7208-41ce-477d-8264-ed2653626238", 00:18:13.034 "strip_size_kb": 0, 00:18:13.034 "state": "online", 00:18:13.034 "raid_level": "raid1", 00:18:13.034 "superblock": true, 00:18:13.034 "num_base_bdevs": 2, 00:18:13.034 "num_base_bdevs_discovered": 2, 00:18:13.034 "num_base_bdevs_operational": 2, 00:18:13.034 "base_bdevs_list": [ 00:18:13.034 { 00:18:13.034 "name": "spare", 00:18:13.034 "uuid": "d7a00970-9614-55a5-b732-34f3878a8300", 00:18:13.034 "is_configured": true, 00:18:13.034 "data_offset": 256, 00:18:13.034 "data_size": 7936 00:18:13.034 }, 00:18:13.034 { 00:18:13.034 "name": "BaseBdev2", 00:18:13.034 "uuid": "09b2349b-c896-5166-91e6-d673b95fbe7e", 00:18:13.034 "is_configured": true, 00:18:13.034 "data_offset": 256, 00:18:13.034 "data_size": 7936 00:18:13.034 } 00:18:13.034 ] 00:18:13.034 }' 00:18:13.034 13:28:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:13.034 13:28:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:13.034 13:28:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:13.294 13:28:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:13.294 13:28:02 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:13.294 13:28:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.294 13:28:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.294 13:28:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:13.294 13:28:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.294 13:28:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:13.294 13:28:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:13.294 13:28:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.294 13:28:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:13.294 [2024-11-17 13:28:02.332272] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:13.294 13:28:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.294 13:28:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:13.294 13:28:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:13.294 13:28:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:13.294 13:28:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:13.294 13:28:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:13.294 13:28:02 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:13.294 13:28:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:13.294 13:28:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:13.294 13:28:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:13.294 13:28:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:13.294 13:28:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.294 13:28:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.294 13:28:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:13.294 13:28:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.294 13:28:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.294 13:28:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:13.294 "name": "raid_bdev1", 00:18:13.294 "uuid": "a52b7208-41ce-477d-8264-ed2653626238", 00:18:13.294 "strip_size_kb": 0, 00:18:13.294 "state": "online", 00:18:13.294 "raid_level": "raid1", 00:18:13.294 "superblock": true, 00:18:13.294 "num_base_bdevs": 2, 00:18:13.294 "num_base_bdevs_discovered": 1, 00:18:13.294 "num_base_bdevs_operational": 1, 00:18:13.294 "base_bdevs_list": [ 00:18:13.294 { 00:18:13.294 "name": null, 00:18:13.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.294 "is_configured": false, 00:18:13.294 "data_offset": 0, 00:18:13.294 "data_size": 7936 00:18:13.294 }, 00:18:13.294 { 00:18:13.294 "name": "BaseBdev2", 00:18:13.294 
"uuid": "09b2349b-c896-5166-91e6-d673b95fbe7e", 00:18:13.294 "is_configured": true, 00:18:13.294 "data_offset": 256, 00:18:13.294 "data_size": 7936 00:18:13.294 } 00:18:13.294 ] 00:18:13.294 }' 00:18:13.294 13:28:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:13.294 13:28:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:13.865 13:28:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:13.865 13:28:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.865 13:28:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:13.865 [2024-11-17 13:28:02.819426] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:13.865 [2024-11-17 13:28:02.819654] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:13.865 [2024-11-17 13:28:02.819719] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:13.865 [2024-11-17 13:28:02.819798] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:13.866 [2024-11-17 13:28:02.836128] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:13.866 13:28:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.866 13:28:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:13.866 [2024-11-17 13:28:02.838382] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:14.806 13:28:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:14.806 13:28:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:14.806 13:28:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:14.806 13:28:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:14.806 13:28:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:14.806 13:28:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.806 13:28:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.806 13:28:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.806 13:28:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.806 13:28:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.806 13:28:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:18:14.806 "name": "raid_bdev1", 00:18:14.806 "uuid": "a52b7208-41ce-477d-8264-ed2653626238", 00:18:14.806 "strip_size_kb": 0, 00:18:14.806 "state": "online", 00:18:14.806 "raid_level": "raid1", 00:18:14.806 "superblock": true, 00:18:14.806 "num_base_bdevs": 2, 00:18:14.806 "num_base_bdevs_discovered": 2, 00:18:14.806 "num_base_bdevs_operational": 2, 00:18:14.806 "process": { 00:18:14.806 "type": "rebuild", 00:18:14.806 "target": "spare", 00:18:14.806 "progress": { 00:18:14.806 "blocks": 2560, 00:18:14.806 "percent": 32 00:18:14.806 } 00:18:14.806 }, 00:18:14.806 "base_bdevs_list": [ 00:18:14.806 { 00:18:14.806 "name": "spare", 00:18:14.806 "uuid": "d7a00970-9614-55a5-b732-34f3878a8300", 00:18:14.806 "is_configured": true, 00:18:14.806 "data_offset": 256, 00:18:14.806 "data_size": 7936 00:18:14.806 }, 00:18:14.806 { 00:18:14.806 "name": "BaseBdev2", 00:18:14.806 "uuid": "09b2349b-c896-5166-91e6-d673b95fbe7e", 00:18:14.806 "is_configured": true, 00:18:14.806 "data_offset": 256, 00:18:14.806 "data_size": 7936 00:18:14.806 } 00:18:14.806 ] 00:18:14.806 }' 00:18:14.806 13:28:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:14.806 13:28:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:14.806 13:28:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:14.806 13:28:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:14.806 13:28:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:14.806 13:28:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.806 13:28:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.806 [2024-11-17 13:28:04.002481] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:15.066 [2024-11-17 13:28:04.047188] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:15.066 [2024-11-17 13:28:04.047267] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:15.066 [2024-11-17 13:28:04.047283] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:15.066 [2024-11-17 13:28:04.047294] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:15.066 13:28:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.066 13:28:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:15.066 13:28:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:15.066 13:28:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:15.066 13:28:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:15.066 13:28:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:15.066 13:28:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:15.066 13:28:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:15.066 13:28:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:15.066 13:28:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:15.066 13:28:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:15.066 13:28:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.066 13:28:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.066 13:28:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.066 13:28:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.066 13:28:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.066 13:28:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:15.066 "name": "raid_bdev1", 00:18:15.066 "uuid": "a52b7208-41ce-477d-8264-ed2653626238", 00:18:15.066 "strip_size_kb": 0, 00:18:15.066 "state": "online", 00:18:15.066 "raid_level": "raid1", 00:18:15.066 "superblock": true, 00:18:15.066 "num_base_bdevs": 2, 00:18:15.066 "num_base_bdevs_discovered": 1, 00:18:15.066 "num_base_bdevs_operational": 1, 00:18:15.066 "base_bdevs_list": [ 00:18:15.066 { 00:18:15.066 "name": null, 00:18:15.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.066 "is_configured": false, 00:18:15.066 "data_offset": 0, 00:18:15.066 "data_size": 7936 00:18:15.066 }, 00:18:15.066 { 00:18:15.066 "name": "BaseBdev2", 00:18:15.066 "uuid": "09b2349b-c896-5166-91e6-d673b95fbe7e", 00:18:15.066 "is_configured": true, 00:18:15.066 "data_offset": 256, 00:18:15.066 "data_size": 7936 00:18:15.066 } 00:18:15.066 ] 00:18:15.066 }' 00:18:15.066 13:28:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:15.066 13:28:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.327 13:28:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:15.327 13:28:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.327 13:28:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.327 [2024-11-17 13:28:04.527366] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:15.327 [2024-11-17 13:28:04.527487] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:15.327 [2024-11-17 13:28:04.527531] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:15.327 [2024-11-17 13:28:04.527562] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:15.327 [2024-11-17 13:28:04.527840] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:15.327 [2024-11-17 13:28:04.527902] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:15.327 [2024-11-17 13:28:04.528005] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:15.327 [2024-11-17 13:28:04.528047] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:15.327 [2024-11-17 13:28:04.528093] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:15.327 [2024-11-17 13:28:04.528164] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:15.327 [2024-11-17 13:28:04.544950] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:18:15.327 spare 00:18:15.327 13:28:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.327 13:28:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:15.327 [2024-11-17 13:28:04.547196] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:16.710 13:28:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:16.710 13:28:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:16.710 13:28:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:16.710 13:28:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:16.710 13:28:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:16.710 13:28:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.710 13:28:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.710 13:28:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.710 13:28:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.710 13:28:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.710 13:28:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:18:16.710 "name": "raid_bdev1", 00:18:16.710 "uuid": "a52b7208-41ce-477d-8264-ed2653626238", 00:18:16.710 "strip_size_kb": 0, 00:18:16.710 "state": "online", 00:18:16.710 "raid_level": "raid1", 00:18:16.710 "superblock": true, 00:18:16.710 "num_base_bdevs": 2, 00:18:16.710 "num_base_bdevs_discovered": 2, 00:18:16.710 "num_base_bdevs_operational": 2, 00:18:16.710 "process": { 00:18:16.710 "type": "rebuild", 00:18:16.710 "target": "spare", 00:18:16.710 "progress": { 00:18:16.710 "blocks": 2560, 00:18:16.710 "percent": 32 00:18:16.710 } 00:18:16.710 }, 00:18:16.710 "base_bdevs_list": [ 00:18:16.710 { 00:18:16.710 "name": "spare", 00:18:16.710 "uuid": "d7a00970-9614-55a5-b732-34f3878a8300", 00:18:16.710 "is_configured": true, 00:18:16.710 "data_offset": 256, 00:18:16.710 "data_size": 7936 00:18:16.710 }, 00:18:16.710 { 00:18:16.710 "name": "BaseBdev2", 00:18:16.710 "uuid": "09b2349b-c896-5166-91e6-d673b95fbe7e", 00:18:16.710 "is_configured": true, 00:18:16.710 "data_offset": 256, 00:18:16.710 "data_size": 7936 00:18:16.710 } 00:18:16.710 ] 00:18:16.710 }' 00:18:16.710 13:28:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:16.710 13:28:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:16.710 13:28:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:16.710 13:28:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:16.710 13:28:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:16.710 13:28:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.710 13:28:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.710 [2024-11-17 
13:28:05.708065] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:16.710 [2024-11-17 13:28:05.755921] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:16.710 [2024-11-17 13:28:05.755975] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:16.710 [2024-11-17 13:28:05.755994] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:16.710 [2024-11-17 13:28:05.756001] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:16.710 13:28:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.710 13:28:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:16.710 13:28:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:16.710 13:28:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:16.710 13:28:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:16.710 13:28:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:16.710 13:28:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:16.710 13:28:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:16.710 13:28:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:16.710 13:28:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:16.710 13:28:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:16.710 13:28:05 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.710 13:28:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.710 13:28:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.710 13:28:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.710 13:28:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.710 13:28:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:16.710 "name": "raid_bdev1", 00:18:16.710 "uuid": "a52b7208-41ce-477d-8264-ed2653626238", 00:18:16.710 "strip_size_kb": 0, 00:18:16.710 "state": "online", 00:18:16.710 "raid_level": "raid1", 00:18:16.710 "superblock": true, 00:18:16.710 "num_base_bdevs": 2, 00:18:16.710 "num_base_bdevs_discovered": 1, 00:18:16.710 "num_base_bdevs_operational": 1, 00:18:16.710 "base_bdevs_list": [ 00:18:16.710 { 00:18:16.710 "name": null, 00:18:16.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.710 "is_configured": false, 00:18:16.710 "data_offset": 0, 00:18:16.710 "data_size": 7936 00:18:16.710 }, 00:18:16.710 { 00:18:16.710 "name": "BaseBdev2", 00:18:16.710 "uuid": "09b2349b-c896-5166-91e6-d673b95fbe7e", 00:18:16.710 "is_configured": true, 00:18:16.710 "data_offset": 256, 00:18:16.710 "data_size": 7936 00:18:16.710 } 00:18:16.710 ] 00:18:16.710 }' 00:18:16.710 13:28:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:16.710 13:28:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.281 13:28:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:17.281 13:28:06 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:17.281 13:28:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:17.281 13:28:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:17.281 13:28:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:17.281 13:28:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.281 13:28:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.281 13:28:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.281 13:28:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.281 13:28:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.281 13:28:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:17.281 "name": "raid_bdev1", 00:18:17.281 "uuid": "a52b7208-41ce-477d-8264-ed2653626238", 00:18:17.281 "strip_size_kb": 0, 00:18:17.281 "state": "online", 00:18:17.281 "raid_level": "raid1", 00:18:17.281 "superblock": true, 00:18:17.281 "num_base_bdevs": 2, 00:18:17.281 "num_base_bdevs_discovered": 1, 00:18:17.281 "num_base_bdevs_operational": 1, 00:18:17.281 "base_bdevs_list": [ 00:18:17.281 { 00:18:17.281 "name": null, 00:18:17.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.281 "is_configured": false, 00:18:17.281 "data_offset": 0, 00:18:17.281 "data_size": 7936 00:18:17.281 }, 00:18:17.281 { 00:18:17.281 "name": "BaseBdev2", 00:18:17.281 "uuid": "09b2349b-c896-5166-91e6-d673b95fbe7e", 00:18:17.281 "is_configured": true, 00:18:17.281 "data_offset": 256, 
00:18:17.281 "data_size": 7936 00:18:17.281 } 00:18:17.281 ] 00:18:17.281 }' 00:18:17.281 13:28:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:17.281 13:28:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:17.281 13:28:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:17.281 13:28:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:17.281 13:28:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:17.281 13:28:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.281 13:28:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.281 13:28:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.281 13:28:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:17.281 13:28:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.281 13:28:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.281 [2024-11-17 13:28:06.427021] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:17.281 [2024-11-17 13:28:06.427121] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:17.281 [2024-11-17 13:28:06.427163] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:17.281 [2024-11-17 13:28:06.427191] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:17.281 [2024-11-17 13:28:06.427433] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:17.281 [2024-11-17 13:28:06.427478] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:17.281 [2024-11-17 13:28:06.427572] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:17.281 [2024-11-17 13:28:06.427610] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:17.281 [2024-11-17 13:28:06.427624] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:17.281 [2024-11-17 13:28:06.427636] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:17.281 BaseBdev1 00:18:17.281 13:28:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.281 13:28:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:18.221 13:28:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:18.221 13:28:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:18.221 13:28:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:18.221 13:28:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:18.221 13:28:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:18.221 13:28:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:18.221 13:28:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:18.221 13:28:07 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:18.221 13:28:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:18.221 13:28:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:18.221 13:28:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.221 13:28:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.221 13:28:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.221 13:28:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:18.481 13:28:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.481 13:28:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:18.481 "name": "raid_bdev1", 00:18:18.481 "uuid": "a52b7208-41ce-477d-8264-ed2653626238", 00:18:18.481 "strip_size_kb": 0, 00:18:18.481 "state": "online", 00:18:18.481 "raid_level": "raid1", 00:18:18.481 "superblock": true, 00:18:18.481 "num_base_bdevs": 2, 00:18:18.481 "num_base_bdevs_discovered": 1, 00:18:18.481 "num_base_bdevs_operational": 1, 00:18:18.481 "base_bdevs_list": [ 00:18:18.481 { 00:18:18.481 "name": null, 00:18:18.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.481 "is_configured": false, 00:18:18.481 "data_offset": 0, 00:18:18.481 "data_size": 7936 00:18:18.481 }, 00:18:18.481 { 00:18:18.481 "name": "BaseBdev2", 00:18:18.481 "uuid": "09b2349b-c896-5166-91e6-d673b95fbe7e", 00:18:18.481 "is_configured": true, 00:18:18.481 "data_offset": 256, 00:18:18.481 "data_size": 7936 00:18:18.481 } 00:18:18.482 ] 00:18:18.482 }' 00:18:18.482 13:28:07 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:18.482 13:28:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:18.742 13:28:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:18.742 13:28:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:18.742 13:28:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:18.742 13:28:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:18.742 13:28:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:18.742 13:28:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.742 13:28:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.742 13:28:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.742 13:28:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:18.742 13:28:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.742 13:28:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:18.742 "name": "raid_bdev1", 00:18:18.742 "uuid": "a52b7208-41ce-477d-8264-ed2653626238", 00:18:18.742 "strip_size_kb": 0, 00:18:18.742 "state": "online", 00:18:18.742 "raid_level": "raid1", 00:18:18.742 "superblock": true, 00:18:18.742 "num_base_bdevs": 2, 00:18:18.742 "num_base_bdevs_discovered": 1, 00:18:18.742 "num_base_bdevs_operational": 1, 00:18:18.742 "base_bdevs_list": [ 00:18:18.742 { 00:18:18.742 "name": 
null, 00:18:18.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.742 "is_configured": false, 00:18:18.742 "data_offset": 0, 00:18:18.742 "data_size": 7936 00:18:18.742 }, 00:18:18.742 { 00:18:18.742 "name": "BaseBdev2", 00:18:18.742 "uuid": "09b2349b-c896-5166-91e6-d673b95fbe7e", 00:18:18.742 "is_configured": true, 00:18:18.742 "data_offset": 256, 00:18:18.742 "data_size": 7936 00:18:18.742 } 00:18:18.742 ] 00:18:18.742 }' 00:18:18.742 13:28:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:19.003 13:28:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:19.003 13:28:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:19.003 13:28:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:19.003 13:28:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:19.003 13:28:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:18:19.003 13:28:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:19.003 13:28:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:19.003 13:28:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:19.003 13:28:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:19.003 13:28:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:19.003 13:28:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:19.003 13:28:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.003 13:28:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:19.003 [2024-11-17 13:28:08.048289] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:19.003 [2024-11-17 13:28:08.048513] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:19.003 [2024-11-17 13:28:08.048577] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:19.003 request: 00:18:19.003 { 00:18:19.003 "base_bdev": "BaseBdev1", 00:18:19.003 "raid_bdev": "raid_bdev1", 00:18:19.003 "method": "bdev_raid_add_base_bdev", 00:18:19.003 "req_id": 1 00:18:19.003 } 00:18:19.003 Got JSON-RPC error response 00:18:19.003 response: 00:18:19.003 { 00:18:19.003 "code": -22, 00:18:19.003 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:19.003 } 00:18:19.003 13:28:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:19.003 13:28:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:18:19.003 13:28:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:19.003 13:28:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:19.003 13:28:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:19.003 13:28:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:19.944 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:18:19.944 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:19.944 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:19.944 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:19.944 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:19.944 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:19.944 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:19.944 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:19.944 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:19.944 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:19.944 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.944 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.944 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.944 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:19.944 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.944 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:19.944 "name": "raid_bdev1", 00:18:19.944 "uuid": "a52b7208-41ce-477d-8264-ed2653626238", 00:18:19.944 "strip_size_kb": 0, 
00:18:19.944 "state": "online", 00:18:19.944 "raid_level": "raid1", 00:18:19.944 "superblock": true, 00:18:19.944 "num_base_bdevs": 2, 00:18:19.944 "num_base_bdevs_discovered": 1, 00:18:19.944 "num_base_bdevs_operational": 1, 00:18:19.944 "base_bdevs_list": [ 00:18:19.944 { 00:18:19.944 "name": null, 00:18:19.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.944 "is_configured": false, 00:18:19.944 "data_offset": 0, 00:18:19.944 "data_size": 7936 00:18:19.944 }, 00:18:19.944 { 00:18:19.944 "name": "BaseBdev2", 00:18:19.944 "uuid": "09b2349b-c896-5166-91e6-d673b95fbe7e", 00:18:19.944 "is_configured": true, 00:18:19.944 "data_offset": 256, 00:18:19.944 "data_size": 7936 00:18:19.944 } 00:18:19.944 ] 00:18:19.944 }' 00:18:19.944 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:19.944 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.514 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:20.514 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:20.514 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:20.514 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:20.514 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:20.514 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.514 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.514 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.514 13:28:09 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.514 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.514 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:20.514 "name": "raid_bdev1", 00:18:20.514 "uuid": "a52b7208-41ce-477d-8264-ed2653626238", 00:18:20.514 "strip_size_kb": 0, 00:18:20.514 "state": "online", 00:18:20.514 "raid_level": "raid1", 00:18:20.514 "superblock": true, 00:18:20.514 "num_base_bdevs": 2, 00:18:20.514 "num_base_bdevs_discovered": 1, 00:18:20.514 "num_base_bdevs_operational": 1, 00:18:20.514 "base_bdevs_list": [ 00:18:20.514 { 00:18:20.514 "name": null, 00:18:20.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.514 "is_configured": false, 00:18:20.514 "data_offset": 0, 00:18:20.514 "data_size": 7936 00:18:20.514 }, 00:18:20.514 { 00:18:20.514 "name": "BaseBdev2", 00:18:20.514 "uuid": "09b2349b-c896-5166-91e6-d673b95fbe7e", 00:18:20.514 "is_configured": true, 00:18:20.514 "data_offset": 256, 00:18:20.514 "data_size": 7936 00:18:20.514 } 00:18:20.514 ] 00:18:20.514 }' 00:18:20.514 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:20.514 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:20.514 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:20.514 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:20.514 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 88889 00:18:20.514 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88889 ']' 00:18:20.514 13:28:09 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88889 00:18:20.514 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:18:20.514 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:20.514 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88889 00:18:20.514 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:20.514 killing process with pid 88889 00:18:20.514 Received shutdown signal, test time was about 60.000000 seconds 00:18:20.514 00:18:20.514 Latency(us) 00:18:20.514 [2024-11-17T13:28:09.738Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:20.514 [2024-11-17T13:28:09.738Z] =================================================================================================================== 00:18:20.514 [2024-11-17T13:28:09.738Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:20.514 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:20.514 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88889' 00:18:20.514 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88889 00:18:20.514 [2024-11-17 13:28:09.696395] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:20.514 [2024-11-17 13:28:09.696533] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:20.514 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88889 00:18:20.514 [2024-11-17 13:28:09.696586] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:18:20.514 [2024-11-17 13:28:09.696599] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:21.085 [2024-11-17 13:28:10.010742] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:22.031 13:28:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:18:22.031 00:18:22.031 real 0m17.861s 00:18:22.031 user 0m23.364s 00:18:22.031 sys 0m1.848s 00:18:22.031 13:28:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:22.031 ************************************ 00:18:22.031 END TEST raid_rebuild_test_sb_md_interleaved 00:18:22.031 ************************************ 00:18:22.031 13:28:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.031 13:28:11 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:18:22.031 13:28:11 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:18:22.031 13:28:11 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 88889 ']' 00:18:22.031 13:28:11 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 88889 00:18:22.292 13:28:11 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:18:22.292 00:18:22.292 real 11m51.135s 00:18:22.292 user 15m57.243s 00:18:22.292 sys 1m53.084s 00:18:22.292 ************************************ 00:18:22.292 END TEST bdev_raid 00:18:22.292 13:28:11 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:22.292 13:28:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:22.292 ************************************ 00:18:22.292 13:28:11 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:22.292 13:28:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:22.292 13:28:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:22.292 13:28:11 -- common/autotest_common.sh@10 -- # set +x 00:18:22.292 
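The verify_raid_bdev_state / verify_raid_bdev_process helpers traced above fetch `bdev_raid_get_bdevs all` output over RPC and filter it with `jq -r '.[] | select(.name == "raid_bdev1")'`. A minimal standalone sketch of the same state check follows — the JSON is a trimmed copy of the raid_bdev1 record from this log rather than a live RPC response, and `sed` stands in for the jq filter so the sketch has no dependencies; it is an illustration, not the scripts' exact mechanics:

```shell
#!/usr/bin/env bash
# Sketch only: check the "state" field of one bdev_raid_get_bdevs entry.
# The JSON below is a trimmed copy of the raid_bdev1 record seen in this log.
raid_bdev_info='{"name": "raid_bdev1", "state": "online", "raid_level": "raid1", "num_base_bdevs_discovered": 1}'

expected_state=online
# Extract the value of "state" (sed here plays the role of the jq filter).
state=$(sed -n 's/.*"state": "\([^"]*\)".*/\1/p' <<< "$raid_bdev_info")
if [[ $state == "$expected_state" ]]; then
    echo "raid_bdev1 is $state"
else
    echo "unexpected state: $state" >&2
    exit 1
fi
```

Run against this log's record, the check passes and reports the bdev as online, mirroring the `[[ online == online ]]`-style comparisons in the trace.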
************************************ 00:18:22.292 START TEST spdkcli_raid 00:18:22.292 ************************************ 00:18:22.292 13:28:11 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:22.292 * Looking for test storage... 00:18:22.292 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:22.292 13:28:11 spdkcli_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:22.292 13:28:11 spdkcli_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:18:22.292 13:28:11 spdkcli_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:22.553 13:28:11 spdkcli_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:22.553 13:28:11 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:22.553 13:28:11 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:22.553 13:28:11 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:22.553 13:28:11 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:18:22.553 13:28:11 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:18:22.553 13:28:11 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:18:22.553 13:28:11 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:18:22.553 13:28:11 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:18:22.553 13:28:11 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:18:22.553 13:28:11 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:18:22.553 13:28:11 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:22.553 13:28:11 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:18:22.553 13:28:11 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:18:22.553 13:28:11 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:22.553 13:28:11 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:22.553 13:28:11 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:18:22.553 13:28:11 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:18:22.553 13:28:11 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:22.553 13:28:11 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:18:22.553 13:28:11 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:18:22.553 13:28:11 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:18:22.553 13:28:11 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:18:22.553 13:28:11 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:22.553 13:28:11 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:18:22.553 13:28:11 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:18:22.553 13:28:11 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:22.553 13:28:11 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:22.553 13:28:11 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:18:22.553 13:28:11 spdkcli_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:22.553 13:28:11 spdkcli_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:22.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.553 --rc genhtml_branch_coverage=1 00:18:22.553 --rc genhtml_function_coverage=1 00:18:22.553 --rc genhtml_legend=1 00:18:22.553 --rc geninfo_all_blocks=1 00:18:22.553 --rc geninfo_unexecuted_blocks=1 00:18:22.553 00:18:22.553 ' 00:18:22.553 13:28:11 spdkcli_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:22.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.553 --rc genhtml_branch_coverage=1 00:18:22.553 --rc genhtml_function_coverage=1 00:18:22.553 --rc genhtml_legend=1 00:18:22.553 --rc geninfo_all_blocks=1 00:18:22.553 --rc geninfo_unexecuted_blocks=1 00:18:22.553 00:18:22.553 ' 00:18:22.553 
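The cmp_versions trace above (scripts/common.sh, invoked here as `lt 1.15 2` on the lcov version) splits both version strings on `.`, `-` and `:` and compares them field by field. A simplified sketch of just the less-than path — the real helper also supports other operators (`>`, `=`, ...), which this sketch omits:

```shell
#!/usr/bin/env bash
# Simplified sketch of the lt/cmp_versions "<" path traced above:
# split versions on ".-:" and compare numeric fields left to right,
# treating missing fields as 0.
lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1   # equal versions: not strictly less-than
}

lt 1.15 2 && echo "1.15 < 2"
```

`lt` returns success (0) exactly when the first version is strictly lower, matching the `return 0` seen in the trace for `lt 1.15 2`.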
13:28:11 spdkcli_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:22.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.553 --rc genhtml_branch_coverage=1 00:18:22.553 --rc genhtml_function_coverage=1 00:18:22.553 --rc genhtml_legend=1 00:18:22.553 --rc geninfo_all_blocks=1 00:18:22.553 --rc geninfo_unexecuted_blocks=1 00:18:22.553 00:18:22.553 ' 00:18:22.553 13:28:11 spdkcli_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:22.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.553 --rc genhtml_branch_coverage=1 00:18:22.553 --rc genhtml_function_coverage=1 00:18:22.553 --rc genhtml_legend=1 00:18:22.553 --rc geninfo_all_blocks=1 00:18:22.553 --rc geninfo_unexecuted_blocks=1 00:18:22.553 00:18:22.553 ' 00:18:22.553 13:28:11 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:18:22.554 13:28:11 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:18:22.554 13:28:11 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:18:22.554 13:28:11 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:18:22.554 13:28:11 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:18:22.554 13:28:11 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:18:22.554 13:28:11 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:18:22.554 13:28:11 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:18:22.554 13:28:11 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:18:22.554 13:28:11 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:18:22.554 13:28:11 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:18:22.554 13:28:11 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:18:22.554 13:28:11 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:18:22.554 13:28:11 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:18:22.554 13:28:11 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:18:22.554 13:28:11 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:18:22.554 13:28:11 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:18:22.554 13:28:11 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:18:22.554 13:28:11 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:18:22.554 13:28:11 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:18:22.554 13:28:11 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:18:22.554 13:28:11 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:18:22.554 13:28:11 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:18:22.554 13:28:11 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:18:22.554 13:28:11 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:18:22.554 13:28:11 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:22.554 13:28:11 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:22.554 13:28:11 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:22.554 13:28:11 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:18:22.554 13:28:11 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:18:22.554 13:28:11 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:18:22.554 13:28:11 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:18:22.554 13:28:11 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:18:22.554 13:28:11 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:22.554 13:28:11 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:22.554 13:28:11 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:18:22.554 13:28:11 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=89571 00:18:22.554 13:28:11 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:18:22.554 13:28:11 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 89571 00:18:22.554 13:28:11 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 89571 ']' 00:18:22.554 13:28:11 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:22.554 13:28:11 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:22.554 13:28:11 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:22.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:22.554 13:28:11 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:22.554 13:28:11 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:22.554 [2024-11-17 13:28:11.701273] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
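Both suites in this log tear the target down through the killprocess helper (pid 88889 in the trace above, pid 89571 below): confirm the pid is still alive, read its comm name with `ps --no-headers -o comm=`, refuse to signal sudo, then kill and wait. A self-contained sketch of that flow — a background `sleep` stands in for the spdk_tgt reactor process, so this is an illustration of the shape of the helper, not a copy of it:

```shell
#!/usr/bin/env bash
# Sketch of the killprocess flow traced in this log (autotest_common.sh).
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0      # already gone: nothing to do
    local process_name
    process_name=$(ps --no-headers -o comm= -p "$pid")
    [ "$process_name" = sudo ] && return 1      # never signal sudo itself
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true             # reap the child
}

sleep 30 &                  # stand-in for the spdk_tgt process
tgt_pid=$!
killprocess "$tgt_pid"
```

After the call the pid is fully reaped, which is why the real suites can safely `rm -rf` their work directories right afterwards.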
00:18:22.554 [2024-11-17 13:28:11.701458] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89571 ] 00:18:22.814 [2024-11-17 13:28:11.877036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:22.814 [2024-11-17 13:28:12.012994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:22.814 [2024-11-17 13:28:12.013027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:24.195 13:28:12 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:24.195 13:28:12 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:18:24.195 13:28:13 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:18:24.195 13:28:13 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:24.195 13:28:13 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:24.195 13:28:13 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:18:24.195 13:28:13 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:24.195 13:28:13 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:24.195 13:28:13 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:18:24.195 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:18:24.195 ' 00:18:25.577 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:18:25.577 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:18:25.577 13:28:14 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:18:25.577 13:28:14 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:25.577 13:28:14 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:18:25.577 13:28:14 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:18:25.577 13:28:14 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:25.577 13:28:14 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:25.577 13:28:14 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:18:25.577 ' 00:18:26.957 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:18:26.957 13:28:15 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:18:26.957 13:28:15 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:26.957 13:28:15 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:26.957 13:28:15 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:18:26.957 13:28:15 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:26.957 13:28:15 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:26.957 13:28:15 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:18:26.957 13:28:15 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:18:27.526 13:28:16 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:18:27.526 13:28:16 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:18:27.526 13:28:16 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:18:27.526 13:28:16 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:27.526 13:28:16 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:27.526 13:28:16 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:18:27.526 13:28:16 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:27.526 13:28:16 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:27.526 13:28:16 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:18:27.526 ' 00:18:28.465 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:18:28.465 13:28:17 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:18:28.465 13:28:17 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:28.465 13:28:17 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:28.465 13:28:17 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:18:28.465 13:28:17 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:28.465 13:28:17 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:28.465 13:28:17 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:18:28.465 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:18:28.465 ' 00:18:29.845 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:18:29.845 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:18:30.105 13:28:19 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:18:30.105 13:28:19 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:30.105 13:28:19 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:30.105 13:28:19 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 89571 00:18:30.105 13:28:19 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 89571 ']' 00:18:30.105 13:28:19 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 89571 00:18:30.105 13:28:19 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:18:30.105 13:28:19 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:30.105 13:28:19 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89571 00:18:30.105 13:28:19 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:30.105 13:28:19 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:30.105 13:28:19 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89571' 00:18:30.105 killing process with pid 89571 00:18:30.105 13:28:19 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 89571 00:18:30.105 13:28:19 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 89571 00:18:32.644 13:28:21 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:18:32.644 13:28:21 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 89571 ']' 00:18:32.644 13:28:21 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 89571 00:18:32.644 13:28:21 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 89571 ']' 00:18:32.644 13:28:21 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 89571 00:18:32.644 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (89571) - No such process 00:18:32.644 Process with pid 89571 is not found 00:18:32.644 13:28:21 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 89571 is not found' 00:18:32.644 13:28:21 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:18:32.644 13:28:21 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:18:32.644 13:28:21 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:18:32.644 13:28:21 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:18:32.644 ************************************ 00:18:32.644 END TEST spdkcli_raid 
00:18:32.644 ************************************ 00:18:32.644 00:18:32.644 real 0m10.415s 00:18:32.644 user 0m21.114s 00:18:32.644 sys 0m1.403s 00:18:32.644 13:28:21 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:32.644 13:28:21 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:32.644 13:28:21 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:18:32.644 13:28:21 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:32.644 13:28:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:32.644 13:28:21 -- common/autotest_common.sh@10 -- # set +x 00:18:32.644 ************************************ 00:18:32.644 START TEST blockdev_raid5f 00:18:32.644 ************************************ 00:18:32.644 13:28:21 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:18:32.905 * Looking for test storage... 00:18:32.905 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:18:32.905 13:28:21 blockdev_raid5f -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:32.905 13:28:21 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lcov --version 00:18:32.905 13:28:21 blockdev_raid5f -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:32.905 13:28:22 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:32.905 13:28:22 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:32.905 13:28:22 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:32.905 13:28:22 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:32.905 13:28:22 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:18:32.905 13:28:22 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:18:32.905 13:28:22 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:18:32.905 13:28:22 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:18:32.905 13:28:22 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:18:32.905 13:28:22 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:18:32.905 13:28:22 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:18:32.905 13:28:22 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:32.905 13:28:22 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:18:32.905 13:28:22 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:18:32.905 13:28:22 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:32.905 13:28:22 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:32.905 13:28:22 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:18:32.905 13:28:22 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:18:32.905 13:28:22 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:32.905 13:28:22 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:18:32.905 13:28:22 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:18:32.905 13:28:22 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:18:32.905 13:28:22 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:18:32.905 13:28:22 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:32.905 13:28:22 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:18:32.905 13:28:22 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:18:32.905 13:28:22 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:32.905 13:28:22 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:32.905 13:28:22 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:18:32.905 13:28:22 blockdev_raid5f -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:32.905 13:28:22 blockdev_raid5f -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:32.905 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:32.905 --rc genhtml_branch_coverage=1 00:18:32.905 --rc genhtml_function_coverage=1 00:18:32.905 --rc genhtml_legend=1 00:18:32.905 --rc geninfo_all_blocks=1 00:18:32.905 --rc geninfo_unexecuted_blocks=1 00:18:32.905 00:18:32.905 ' 00:18:32.905 13:28:22 blockdev_raid5f -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:32.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:32.905 --rc genhtml_branch_coverage=1 00:18:32.905 --rc genhtml_function_coverage=1 00:18:32.905 --rc genhtml_legend=1 00:18:32.905 --rc geninfo_all_blocks=1 00:18:32.905 --rc geninfo_unexecuted_blocks=1 00:18:32.905 00:18:32.905 ' 00:18:32.905 13:28:22 blockdev_raid5f -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:32.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:32.905 --rc genhtml_branch_coverage=1 00:18:32.906 --rc genhtml_function_coverage=1 00:18:32.906 --rc genhtml_legend=1 00:18:32.906 --rc geninfo_all_blocks=1 00:18:32.906 --rc geninfo_unexecuted_blocks=1 00:18:32.906 00:18:32.906 ' 00:18:32.906 13:28:22 blockdev_raid5f -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:32.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:32.906 --rc genhtml_branch_coverage=1 00:18:32.906 --rc genhtml_function_coverage=1 00:18:32.906 --rc genhtml_legend=1 00:18:32.906 --rc geninfo_all_blocks=1 00:18:32.906 --rc geninfo_unexecuted_blocks=1 00:18:32.906 00:18:32.906 ' 00:18:32.906 13:28:22 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:18:32.906 13:28:22 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:18:32.906 13:28:22 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:18:32.906 13:28:22 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:32.906 13:28:22 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:18:32.906 13:28:22 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:18:32.906 13:28:22 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:18:32.906 13:28:22 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:18:32.906 13:28:22 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:18:32.906 13:28:22 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:18:32.906 13:28:22 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:18:32.906 13:28:22 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:18:32.906 13:28:22 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:18:32.906 13:28:22 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:18:32.906 13:28:22 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:18:32.906 13:28:22 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:18:32.906 13:28:22 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:18:32.906 13:28:22 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:18:32.906 13:28:22 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:18:32.906 13:28:22 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:18:32.906 13:28:22 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:18:32.906 13:28:22 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:18:32.906 13:28:22 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:18:32.906 13:28:22 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:18:32.906 13:28:22 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=89852 00:18:32.906 13:28:22 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:18:32.906 13:28:22 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess 
"$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:18:32.906 13:28:22 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 89852 00:18:32.906 13:28:22 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 89852 ']' 00:18:32.906 13:28:22 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:32.906 13:28:22 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:32.906 13:28:22 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:32.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:32.906 13:28:22 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:32.906 13:28:22 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:33.167 [2024-11-17 13:28:22.178581] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:18:33.167 [2024-11-17 13:28:22.178768] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89852 ] 00:18:33.167 [2024-11-17 13:28:22.349545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.427 [2024-11-17 13:28:22.484364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:34.367 13:28:23 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:34.367 13:28:23 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:18:34.367 13:28:23 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:18:34.367 13:28:23 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:18:34.367 13:28:23 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:18:34.367 13:28:23 blockdev_raid5f -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.367 13:28:23 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:34.367 Malloc0 00:18:34.367 Malloc1 00:18:34.367 Malloc2 00:18:34.367 13:28:23 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.367 13:28:23 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:18:34.367 13:28:23 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.367 13:28:23 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:34.628 13:28:23 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.628 13:28:23 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:18:34.628 13:28:23 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:18:34.628 13:28:23 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.628 13:28:23 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:34.628 13:28:23 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.628 13:28:23 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:18:34.628 13:28:23 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.628 13:28:23 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:34.628 13:28:23 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.628 13:28:23 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:18:34.628 13:28:23 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.628 13:28:23 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:34.628 13:28:23 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.628 13:28:23 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:18:34.628 13:28:23 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 
00:18:34.628 13:28:23 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.628 13:28:23 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:18:34.628 13:28:23 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:34.628 13:28:23 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.628 13:28:23 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:18:34.628 13:28:23 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:18:34.628 13:28:23 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "3366ea3a-35de-4b4d-a676-cdd78b00e75f"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "3366ea3a-35de-4b4d-a676-cdd78b00e75f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "3366ea3a-35de-4b4d-a676-cdd78b00e75f",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "fe8bd7b4-9bdc-4b74-821f-3042d4a56159",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": 
"25a576e6-97a2-420c-bc67-c92d4cb4db8f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "2c2bd52a-3a26-4be7-89f4-a9b6c10daee4",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:18:34.628 13:28:23 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:18:34.628 13:28:23 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:18:34.628 13:28:23 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:18:34.628 13:28:23 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 89852 00:18:34.628 13:28:23 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 89852 ']' 00:18:34.628 13:28:23 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 89852 00:18:34.628 13:28:23 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:18:34.628 13:28:23 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:34.628 13:28:23 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89852 00:18:34.628 killing process with pid 89852 00:18:34.628 13:28:23 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:34.628 13:28:23 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:34.628 13:28:23 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89852' 00:18:34.628 13:28:23 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 89852 00:18:34.628 13:28:23 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 89852 00:18:37.926 13:28:26 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:37.926 13:28:26 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:18:37.926 13:28:26 
blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:37.926 13:28:26 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:37.926 13:28:26 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:37.926 ************************************ 00:18:37.926 START TEST bdev_hello_world 00:18:37.926 ************************************ 00:18:37.926 13:28:26 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:18:37.926 [2024-11-17 13:28:26.728203] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:18:37.926 [2024-11-17 13:28:26.728311] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89919 ] 00:18:37.926 [2024-11-17 13:28:26.898967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.926 [2024-11-17 13:28:27.030776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:38.496 [2024-11-17 13:28:27.641929] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:18:38.496 [2024-11-17 13:28:27.641988] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:18:38.496 [2024-11-17 13:28:27.642006] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:18:38.496 [2024-11-17 13:28:27.642506] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:18:38.496 [2024-11-17 13:28:27.642646] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:18:38.496 [2024-11-17 13:28:27.642662] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:18:38.496 [2024-11-17 13:28:27.642708] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:18:38.496 00:18:38.496 [2024-11-17 13:28:27.642729] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:18:40.405 00:18:40.405 real 0m2.460s 00:18:40.405 user 0m1.988s 00:18:40.405 sys 0m0.347s 00:18:40.405 13:28:29 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:40.405 ************************************ 00:18:40.405 END TEST bdev_hello_world 00:18:40.405 ************************************ 00:18:40.405 13:28:29 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:18:40.405 13:28:29 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:18:40.405 13:28:29 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:40.405 13:28:29 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:40.405 13:28:29 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:40.405 ************************************ 00:18:40.405 START TEST bdev_bounds 00:18:40.405 ************************************ 00:18:40.405 Process bdevio pid: 89971 00:18:40.405 13:28:29 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:18:40.405 13:28:29 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=89971 00:18:40.405 13:28:29 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:18:40.405 13:28:29 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:18:40.405 13:28:29 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 89971' 00:18:40.405 13:28:29 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 89971 00:18:40.405 13:28:29 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 89971 ']' 00:18:40.405 13:28:29 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:40.405 13:28:29 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:40.405 13:28:29 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:40.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:40.405 13:28:29 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:40.405 13:28:29 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:18:40.405 [2024-11-17 13:28:29.265582] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 00:18:40.405 [2024-11-17 13:28:29.265810] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89971 ] 00:18:40.405 [2024-11-17 13:28:29.444693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:40.405 [2024-11-17 13:28:29.582653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:40.405 [2024-11-17 13:28:29.582789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:40.405 [2024-11-17 13:28:29.582827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:41.344 13:28:30 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:41.344 13:28:30 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:18:41.344 13:28:30 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:18:41.344 I/O targets: 00:18:41.344 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:18:41.344 00:18:41.344 
00:18:41.344 CUnit - A unit testing framework for C - Version 2.1-3 00:18:41.344 http://cunit.sourceforge.net/ 00:18:41.344 00:18:41.344 00:18:41.344 Suite: bdevio tests on: raid5f 00:18:41.344 Test: blockdev write read block ...passed 00:18:41.344 Test: blockdev write zeroes read block ...passed 00:18:41.344 Test: blockdev write zeroes read no split ...passed 00:18:41.344 Test: blockdev write zeroes read split ...passed 00:18:41.344 Test: blockdev write zeroes read split partial ...passed 00:18:41.344 Test: blockdev reset ...passed 00:18:41.344 Test: blockdev write read 8 blocks ...passed 00:18:41.344 Test: blockdev write read size > 128k ...passed 00:18:41.344 Test: blockdev write read invalid size ...passed 00:18:41.344 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:41.344 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:41.344 Test: blockdev write read max offset ...passed 00:18:41.344 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:41.344 Test: blockdev writev readv 8 blocks ...passed 00:18:41.344 Test: blockdev writev readv 30 x 1block ...passed 00:18:41.344 Test: blockdev writev readv block ...passed 00:18:41.344 Test: blockdev writev readv size > 128k ...passed 00:18:41.344 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:41.344 Test: blockdev comparev and writev ...passed 00:18:41.344 Test: blockdev nvme passthru rw ...passed 00:18:41.344 Test: blockdev nvme passthru vendor specific ...passed 00:18:41.344 Test: blockdev nvme admin passthru ...passed 00:18:41.344 Test: blockdev copy ...passed 00:18:41.344 00:18:41.344 Run Summary: Type Total Ran Passed Failed Inactive 00:18:41.344 suites 1 1 n/a 0 0 00:18:41.344 tests 23 23 23 0 0 00:18:41.344 asserts 130 130 130 0 n/a 00:18:41.344 00:18:41.344 Elapsed time = 0.571 seconds 00:18:41.344 0 00:18:41.344 13:28:30 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 89971 00:18:41.344 
13:28:30 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 89971 ']' 00:18:41.344 13:28:30 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 89971 00:18:41.344 13:28:30 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:18:41.344 13:28:30 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:41.344 13:28:30 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89971 00:18:41.604 13:28:30 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:41.604 13:28:30 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:41.604 13:28:30 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89971' 00:18:41.604 killing process with pid 89971 00:18:41.604 13:28:30 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 89971 00:18:41.604 13:28:30 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 89971 00:18:42.986 ************************************ 00:18:42.986 END TEST bdev_bounds 00:18:42.986 ************************************ 00:18:42.986 13:28:31 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:18:42.986 00:18:42.986 real 0m2.733s 00:18:42.986 user 0m6.630s 00:18:42.986 sys 0m0.476s 00:18:42.986 13:28:31 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:42.987 13:28:31 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:18:42.987 13:28:31 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:18:42.987 13:28:31 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:42.987 13:28:31 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:42.987 
13:28:31 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:42.987 ************************************ 00:18:42.987 START TEST bdev_nbd 00:18:42.987 ************************************ 00:18:42.987 13:28:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:18:42.987 13:28:31 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:18:42.987 13:28:31 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:18:42.987 13:28:31 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:42.987 13:28:31 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:42.987 13:28:31 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:18:42.987 13:28:31 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:18:42.987 13:28:31 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:18:42.987 13:28:31 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:18:42.987 13:28:31 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:18:42.987 13:28:31 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:18:42.987 13:28:31 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:18:42.987 13:28:31 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:18:42.987 13:28:31 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:18:42.987 13:28:31 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:18:42.987 13:28:31 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:18:42.987 13:28:31 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90032 00:18:42.987 13:28:31 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:18:42.987 13:28:31 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:18:42.987 13:28:31 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90032 /var/tmp/spdk-nbd.sock 00:18:42.987 13:28:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 90032 ']' 00:18:42.987 13:28:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:18:42.987 13:28:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:42.987 13:28:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:18:42.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:18:42.987 13:28:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:42.987 13:28:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:18:42.987 [2024-11-17 13:28:32.082769] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:18:42.987 [2024-11-17 13:28:32.082960] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:43.247 [2024-11-17 13:28:32.260004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.247 [2024-11-17 13:28:32.367307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:43.818 13:28:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:43.818 13:28:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:18:43.818 13:28:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:18:43.818 13:28:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:43.818 13:28:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:18:43.818 13:28:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:18:43.818 13:28:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:18:43.818 13:28:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:43.818 13:28:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:18:43.818 13:28:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:18:43.818 13:28:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:18:43.818 13:28:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:18:43.818 13:28:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:18:43.818 13:28:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:18:43.818 13:28:32 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:18:44.083 13:28:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:18:44.083 13:28:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:18:44.083 13:28:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:18:44.083 13:28:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:44.083 13:28:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:44.083 13:28:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:44.083 13:28:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:44.083 13:28:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:44.083 13:28:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:44.083 13:28:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:44.084 13:28:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:44.084 13:28:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:44.084 1+0 records in 00:18:44.084 1+0 records out 00:18:44.084 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000451808 s, 9.1 MB/s 00:18:44.084 13:28:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:44.084 13:28:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:44.084 13:28:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:44.084 13:28:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:18:44.084 13:28:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:44.084 13:28:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:44.084 13:28:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:18:44.084 13:28:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:44.344 13:28:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:18:44.344 { 00:18:44.344 "nbd_device": "/dev/nbd0", 00:18:44.344 "bdev_name": "raid5f" 00:18:44.344 } 00:18:44.344 ]' 00:18:44.344 13:28:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:18:44.344 13:28:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:18:44.344 { 00:18:44.344 "nbd_device": "/dev/nbd0", 00:18:44.344 "bdev_name": "raid5f" 00:18:44.344 } 00:18:44.344 ]' 00:18:44.344 13:28:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:18:44.344 13:28:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:44.344 13:28:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:44.344 13:28:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:44.344 13:28:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:44.344 13:28:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:44.344 13:28:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:44.344 13:28:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:44.604 13:28:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:18:44.604 13:28:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:44.604 13:28:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:44.604 13:28:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:44.604 13:28:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:44.604 13:28:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:44.604 13:28:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:44.604 13:28:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:44.604 13:28:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:44.604 13:28:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:44.604 13:28:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:44.863 13:28:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:44.863 13:28:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:44.863 13:28:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:44.863 13:28:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:44.863 13:28:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:18:44.863 13:28:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:44.863 13:28:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:18:44.863 13:28:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:18:44.863 13:28:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:18:44.863 13:28:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:18:44.863 13:28:33 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:18:44.863 13:28:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:18:44.863 13:28:33 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:18:44.863 13:28:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:44.863 13:28:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:18:44.863 13:28:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:18:44.863 13:28:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:18:44.863 13:28:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:18:44.863 13:28:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:18:44.863 13:28:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:44.863 13:28:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:18:44.863 13:28:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:44.863 13:28:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:44.863 13:28:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:44.863 13:28:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:18:44.863 13:28:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:44.863 13:28:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:44.863 13:28:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:18:45.123 /dev/nbd0 00:18:45.123 13:28:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:45.123 13:28:34 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:45.123 13:28:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:45.123 13:28:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:45.123 13:28:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:45.123 13:28:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:45.123 13:28:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:45.123 13:28:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:45.123 13:28:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:45.123 13:28:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:45.123 13:28:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:45.123 1+0 records in 00:18:45.123 1+0 records out 00:18:45.123 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000552105 s, 7.4 MB/s 00:18:45.123 13:28:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:45.123 13:28:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:45.123 13:28:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:45.123 13:28:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:45.123 13:28:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:45.123 13:28:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:45.123 13:28:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:45.123 13:28:34 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:45.123 13:28:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:45.123 13:28:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:45.382 13:28:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:18:45.382 { 00:18:45.382 "nbd_device": "/dev/nbd0", 00:18:45.382 "bdev_name": "raid5f" 00:18:45.382 } 00:18:45.382 ]' 00:18:45.382 13:28:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:18:45.382 { 00:18:45.382 "nbd_device": "/dev/nbd0", 00:18:45.382 "bdev_name": "raid5f" 00:18:45.382 } 00:18:45.382 ]' 00:18:45.382 13:28:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:45.382 13:28:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:18:45.382 13:28:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:18:45.382 13:28:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:45.382 13:28:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:18:45.382 13:28:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:18:45.382 13:28:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:18:45.382 13:28:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:18:45.382 13:28:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:18:45.382 13:28:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:18:45.382 13:28:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:45.382 13:28:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:18:45.382 13:28:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:45.382 13:28:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:18:45.382 13:28:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:18:45.382 256+0 records in 00:18:45.382 256+0 records out 00:18:45.382 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00435573 s, 241 MB/s 00:18:45.382 13:28:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:45.382 13:28:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:18:45.382 256+0 records in 00:18:45.382 256+0 records out 00:18:45.382 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.028335 s, 37.0 MB/s 00:18:45.382 13:28:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:18:45.382 13:28:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:18:45.382 13:28:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:45.382 13:28:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:18:45.382 13:28:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:45.382 13:28:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:18:45.382 13:28:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:18:45.382 13:28:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:45.382 13:28:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:18:45.382 13:28:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:45.382 13:28:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:45.382 13:28:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:45.382 13:28:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:45.382 13:28:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:45.382 13:28:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:45.382 13:28:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:45.382 13:28:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:45.641 13:28:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:45.641 13:28:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:45.641 13:28:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:45.641 13:28:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:45.641 13:28:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:45.641 13:28:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:45.641 13:28:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:45.641 13:28:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:45.641 13:28:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:45.641 13:28:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:45.641 13:28:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:18:45.900 13:28:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:45.900 13:28:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:45.900 13:28:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:45.900 13:28:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:45.900 13:28:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:18:45.900 13:28:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:45.900 13:28:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:18:45.900 13:28:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:18:45.900 13:28:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:18:45.900 13:28:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:18:45.900 13:28:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:18:45.900 13:28:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:18:45.900 13:28:34 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:45.900 13:28:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:45.900 13:28:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:18:45.900 13:28:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:18:46.160 malloc_lvol_verify 00:18:46.160 13:28:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:18:46.160 dd74a384-f07b-485d-af8f-8ed0c77f9bd1 00:18:46.160 13:28:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:18:46.419 5ba3e099-11b9-4b23-a5fc-74c83bc38390 00:18:46.419 13:28:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:18:46.680 /dev/nbd0 00:18:46.680 13:28:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:18:46.680 13:28:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:18:46.680 13:28:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:18:46.680 13:28:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:18:46.680 13:28:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:18:46.680 mke2fs 1.47.0 (5-Feb-2023) 00:18:46.680 Discarding device blocks: 0/4096 done 00:18:46.680 Creating filesystem with 4096 1k blocks and 1024 inodes 00:18:46.680 00:18:46.680 Allocating group tables: 0/1 done 00:18:46.680 Writing inode tables: 0/1 done 00:18:46.680 Creating journal (1024 blocks): done 00:18:46.680 Writing superblocks and filesystem accounting information: 0/1 done 00:18:46.680 00:18:46.680 13:28:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:46.680 13:28:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:46.680 13:28:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:46.680 13:28:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:46.680 13:28:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:46.680 13:28:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:46.680 13:28:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:46.939 13:28:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:46.939 13:28:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:46.939 13:28:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:46.939 13:28:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:46.939 13:28:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:46.939 13:28:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:46.939 13:28:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:46.939 13:28:35 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:46.939 13:28:35 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90032 00:18:46.939 13:28:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 90032 ']' 00:18:46.939 13:28:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 90032 00:18:46.939 13:28:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:18:46.939 13:28:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:46.939 13:28:35 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90032 00:18:46.939 13:28:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:46.939 13:28:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:46.939 killing process with pid 90032 00:18:46.939 13:28:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90032' 00:18:46.939 13:28:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 90032 00:18:46.939 13:28:36 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 90032 00:18:48.321 13:28:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:18:48.321 00:18:48.321 real 0m5.387s 00:18:48.321 user 0m7.271s 00:18:48.321 sys 0m1.281s 00:18:48.321 13:28:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:48.321 13:28:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:18:48.321 ************************************ 00:18:48.321 END TEST bdev_nbd 00:18:48.321 ************************************ 00:18:48.321 13:28:37 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:18:48.321 13:28:37 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:18:48.321 13:28:37 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:18:48.321 13:28:37 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:18:48.321 13:28:37 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:48.321 13:28:37 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:48.321 13:28:37 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:48.321 ************************************ 00:18:48.321 START TEST bdev_fio 00:18:48.321 ************************************ 00:18:48.321 13:28:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:18:48.321 13:28:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:18:48.321 13:28:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:18:48.321 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:18:48.321 13:28:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:18:48.321 13:28:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:18:48.321 13:28:37 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:18:48.321 13:28:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:18:48.321 13:28:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:18:48.321 13:28:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:48.321 13:28:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:18:48.321 13:28:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:18:48.321 13:28:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:18:48.321 13:28:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:18:48.321 13:28:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:18:48.321 13:28:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:18:48.321 13:28:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:18:48.321 13:28:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:48.321 13:28:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:18:48.321 13:28:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:18:48.321 13:28:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:18:48.321 13:28:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:18:48.321 13:28:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:18:48.582 13:28:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:18:48.582 13:28:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:18:48.582 13:28:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:18:48.582 13:28:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:18:48.582 13:28:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:18:48.582 13:28:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:18:48.582 13:28:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:48.582 13:28:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:18:48.582 13:28:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:48.582 13:28:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:18:48.582 ************************************ 00:18:48.582 START TEST bdev_fio_rw_verify 00:18:48.582 ************************************ 00:18:48.582 13:28:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:48.582 13:28:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:48.582 13:28:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:48.582 13:28:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:48.582 13:28:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:48.582 13:28:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:48.582 13:28:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:18:48.582 13:28:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:48.582 13:28:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:48.582 13:28:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:48.582 13:28:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:18:48.582 13:28:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:48.582 13:28:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:48.582 13:28:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:48.582 13:28:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:18:48.582 13:28:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:48.582 13:28:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:48.842 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:18:48.842 fio-3.35 00:18:48.842 Starting 1 thread 00:19:01.073 00:19:01.073 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90231: Sun Nov 17 13:28:48 2024 00:19:01.073 read: IOPS=12.1k, BW=47.4MiB/s (49.7MB/s)(474MiB/10001msec) 00:19:01.073 slat (usec): min=17, max=1477, avg=19.44, stdev= 5.36 00:19:01.073 clat (usec): min=11, max=1087, avg=132.12, stdev=49.55 00:19:01.073 lat (usec): min=32, max=1692, avg=151.57, stdev=50.92 00:19:01.073 clat percentiles (usec): 00:19:01.073 | 50.000th=[ 137], 99.000th=[ 219], 99.900th=[ 408], 99.990th=[ 955], 00:19:01.073 | 99.999th=[ 1074] 00:19:01.073 write: IOPS=12.7k, BW=49.6MiB/s (52.0MB/s)(490MiB/9874msec); 0 zone resets 00:19:01.073 slat (usec): min=8, max=321, avg=16.72, stdev= 4.30 00:19:01.073 clat (usec): min=60, max=1304, avg=304.71, stdev=40.94 00:19:01.073 lat (usec): min=76, max=1505, avg=321.43, stdev=41.94 00:19:01.073 clat percentiles (usec): 00:19:01.073 | 50.000th=[ 310], 99.000th=[ 383], 99.900th=[ 553], 99.990th=[ 1106], 00:19:01.073 | 99.999th=[ 1270] 00:19:01.073 bw ( KiB/s): min=47312, max=53416, per=99.06%, avg=50306.95, stdev=1327.65, samples=19 00:19:01.073 iops : min=11828, max=13354, avg=12576.74, stdev=331.91, samples=19 00:19:01.073 lat (usec) : 20=0.01%, 50=0.01%, 
100=16.00%, 250=37.73%, 500=46.16% 00:19:01.073 lat (usec) : 750=0.07%, 1000=0.03% 00:19:01.073 lat (msec) : 2=0.01% 00:19:01.073 cpu : usr=98.54%, sys=0.60%, ctx=23, majf=0, minf=9946 00:19:01.073 IO depths : 1=7.7%, 2=19.9%, 4=55.2%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:01.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.073 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.073 issued rwts: total=121355,125353,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:01.073 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:01.073 00:19:01.073 Run status group 0 (all jobs): 00:19:01.073 READ: bw=47.4MiB/s (49.7MB/s), 47.4MiB/s-47.4MiB/s (49.7MB/s-49.7MB/s), io=474MiB (497MB), run=10001-10001msec 00:19:01.073 WRITE: bw=49.6MiB/s (52.0MB/s), 49.6MiB/s-49.6MiB/s (52.0MB/s-52.0MB/s), io=490MiB (513MB), run=9874-9874msec 00:19:01.073 ----------------------------------------------------- 00:19:01.073 Suppressions used: 00:19:01.073 count bytes template 00:19:01.073 1 7 /usr/src/fio/parse.c 00:19:01.073 246 23616 /usr/src/fio/iolog.c 00:19:01.073 1 8 libtcmalloc_minimal.so 00:19:01.073 1 904 libcrypto.so 00:19:01.073 ----------------------------------------------------- 00:19:01.073 00:19:01.073 00:19:01.073 real 0m12.693s 00:19:01.073 user 0m12.948s 00:19:01.073 sys 0m0.693s 00:19:01.073 13:28:50 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:01.073 13:28:50 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:19:01.073 ************************************ 00:19:01.073 END TEST bdev_fio_rw_verify 00:19:01.073 ************************************ 00:19:01.335 13:28:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:19:01.335 13:28:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:01.335 13:28:50 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:19:01.335 13:28:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:01.335 13:28:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:19:01.335 13:28:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:19:01.335 13:28:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:19:01.335 13:28:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:19:01.335 13:28:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:01.335 13:28:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:19:01.335 13:28:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:19:01.335 13:28:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:01.335 13:28:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:19:01.335 13:28:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:19:01.335 13:28:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:19:01.335 13:28:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:19:01.335 13:28:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "3366ea3a-35de-4b4d-a676-cdd78b00e75f"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "3366ea3a-35de-4b4d-a676-cdd78b00e75f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "3366ea3a-35de-4b4d-a676-cdd78b00e75f",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "fe8bd7b4-9bdc-4b74-821f-3042d4a56159",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "25a576e6-97a2-420c-bc67-c92d4cb4db8f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "2c2bd52a-3a26-4be7-89f4-a9b6c10daee4",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:19:01.335 13:28:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:19:01.335 13:28:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:19:01.335 13:28:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:01.335 13:28:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:19:01.335 /home/vagrant/spdk_repo/spdk 00:19:01.335 13:28:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:19:01.335 13:28:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:19:01.335 00:19:01.335 real 0m12.967s 00:19:01.335 user 0m13.052s 00:19:01.335 sys 0m0.829s 00:19:01.335 13:28:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:01.335 13:28:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:01.335 ************************************ 00:19:01.335 END TEST bdev_fio 00:19:01.335 ************************************ 00:19:01.335 13:28:50 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:01.335 13:28:50 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:01.335 13:28:50 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:19:01.335 13:28:50 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:01.335 13:28:50 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:01.335 ************************************ 00:19:01.335 START TEST bdev_verify 00:19:01.335 ************************************ 00:19:01.335 13:28:50 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:01.596 [2024-11-17 13:28:50.577517] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization... 
00:19:01.596 [2024-11-17 13:28:50.577639] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90395 ]
00:19:01.596 [2024-11-17 13:28:50.754106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:19:01.856 [2024-11-17 13:28:50.870983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:19:01.856 [2024-11-17 13:28:50.871020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:19:02.426 Running I/O for 5 seconds...
00:19:04.305 10828.00 IOPS, 42.30 MiB/s [2024-11-17T13:28:54.467Z] 10842.00 IOPS, 42.35 MiB/s [2024-11-17T13:28:55.405Z] 10881.67 IOPS, 42.51 MiB/s [2024-11-17T13:28:56.785Z] 10901.50 IOPS, 42.58 MiB/s [2024-11-17T13:28:56.785Z] 10892.40 IOPS, 42.55 MiB/s
00:19:07.561 Latency(us)
00:19:07.561 [2024-11-17T13:28:56.785Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:07.561 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:19:07.561 Verification LBA range: start 0x0 length 0x2000
00:19:07.561 raid5f : 5.02 6600.60 25.78 0.00 0.00 29226.78 232.52 21864.41
00:19:07.561 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:19:07.561 Verification LBA range: start 0x2000 length 0x2000
00:19:07.561 raid5f : 5.02 4300.33 16.80 0.00 0.00 44797.60 237.89 30907.81
00:19:07.561 [2024-11-17T13:28:56.785Z] ===================================================================================================================
00:19:07.561 [2024-11-17T13:28:56.785Z] Total : 10900.93 42.58 0.00 0.00 35372.24 232.52 30907.81
00:19:08.943
00:19:08.943 real 0m7.381s
00:19:08.943 user 0m13.633s
00:19:08.943 sys 0m0.274s
00:19:08.943 13:28:57 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:08.943 13:28:57 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:19:08.943 ************************************
00:19:08.943 END TEST bdev_verify
00:19:08.943 ************************************
00:19:08.943 13:28:57 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:19:08.943 13:28:57 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:19:08.943 13:28:57 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:08.943 13:28:57 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:19:08.943 ************************************
00:19:08.943 START TEST bdev_verify_big_io
00:19:08.943 ************************************
00:19:08.943 13:28:57 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:19:08.943 [2024-11-17 13:28:58.041637] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization...
00:19:08.943 [2024-11-17 13:28:58.041792] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90493 ]
00:19:09.203 [2024-11-17 13:28:58.220877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:19:09.203 [2024-11-17 13:28:58.360145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:19:09.203 [2024-11-17 13:28:58.360166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:19:09.771 Running I/O for 5 seconds...
00:19:12.113 633.00 IOPS, 39.56 MiB/s [2024-11-17T13:29:02.275Z] 760.00 IOPS, 47.50 MiB/s [2024-11-17T13:29:03.214Z] 761.33 IOPS, 47.58 MiB/s [2024-11-17T13:29:04.153Z] 777.00 IOPS, 48.56 MiB/s [2024-11-17T13:29:04.412Z] 761.60 IOPS, 47.60 MiB/s
00:19:15.188 Latency(us)
00:19:15.188 [2024-11-17T13:29:04.412Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:15.188 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:19:15.188 Verification LBA range: start 0x0 length 0x200
00:19:15.188 raid5f : 5.17 441.86 27.62 0.00 0.00 7252711.18 279.03 311367.55
00:19:15.188 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:19:15.188 Verification LBA range: start 0x200 length 0x200
00:19:15.188 raid5f : 5.20 341.65 21.35 0.00 0.00 9313206.48 209.27 391956.79
00:19:15.188 [2024-11-17T13:29:04.412Z] ===================================================================================================================
00:19:15.188 [2024-11-17T13:29:04.412Z] Total : 783.52 48.97 0.00 0.00 8154177.87 209.27 391956.79
00:19:16.568
00:19:16.568 real 0m7.705s
00:19:16.568 user 0m14.128s
00:19:16.568 sys 0m0.397s
00:19:16.568 13:29:05 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:16.568 13:29:05 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:19:16.568 ************************************
00:19:16.568 END TEST bdev_verify_big_io
00:19:16.568 ************************************
00:19:16.568 13:29:05 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:19:16.568 13:29:05 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:19:16.568 13:29:05 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:16.568 13:29:05 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:19:16.568 ************************************
00:19:16.568 START TEST bdev_write_zeroes
00:19:16.568 ************************************
00:19:16.568 13:29:05 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:19:16.828 [2024-11-17 13:29:05.827561] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization...
00:19:16.829 [2024-11-17 13:29:05.827687] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90587 ]
00:19:16.829 [2024-11-17 13:29:06.007428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:17.089 [2024-11-17 13:29:06.137964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:19:17.660 Running I/O for 1 seconds...
00:19:18.600 29175.00 IOPS, 113.96 MiB/s
00:19:18.600 Latency(us)
00:19:18.600 [2024-11-17T13:29:07.824Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:18.600 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:19:18.600 raid5f : 1.01 29154.43 113.88 0.00 0.00 4376.98 1574.01 6067.09
00:19:18.600 [2024-11-17T13:29:07.824Z] ===================================================================================================================
00:19:18.600 [2024-11-17T13:29:07.824Z] Total : 29154.43 113.88 0.00 0.00 4376.98 1574.01 6067.09
00:19:19.980
00:19:19.980 real 0m3.471s
00:19:19.980 user 0m2.967s
00:19:19.980 sys 0m0.370s
00:19:19.980 13:29:09 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:19.980 13:29:09 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:19:19.980 ************************************
00:19:19.980 END TEST bdev_write_zeroes
00:19:19.980 ************************************
00:19:20.240 13:29:09 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:19:20.240 13:29:09 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:19:20.241 13:29:09 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:20.241 13:29:09 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:19:20.241 ************************************
00:19:20.241 START TEST bdev_json_nonenclosed
00:19:20.241 ************************************
00:19:20.241 13:29:09 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:19:20.241 [2024-11-17 13:29:09.367121] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization...
00:19:20.241 [2024-11-17 13:29:09.367249] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90654 ]
00:19:20.500 [2024-11-17 13:29:09.544537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:20.500 [2024-11-17 13:29:09.678975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:19:20.500 [2024-11-17 13:29:09.679087] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:19:20.500 [2024-11-17 13:29:09.679120] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:19:20.500 [2024-11-17 13:29:09.679133] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:19:20.760
00:19:20.760 real 0m0.663s
00:19:20.760 user 0m0.410s
00:19:20.760 sys 0m0.148s
00:19:20.760 13:29:09 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:20.760 13:29:09 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:19:20.760 ************************************
00:19:20.760 END TEST bdev_json_nonenclosed
00:19:20.760 ************************************
00:19:21.020 13:29:10 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:19:21.020 13:29:10 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:19:21.020 13:29:10 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:21.020 13:29:10 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:19:21.020 ************************************
00:19:21.020 START TEST bdev_json_nonarray
00:19:21.020 ************************************
00:19:21.020 13:29:10 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:19:21.020 [2024-11-17 13:29:10.113960] Starting SPDK v25.01-pre git sha1 ca87521f7 / DPDK 24.03.0 initialization...
00:19:21.020 [2024-11-17 13:29:10.114100] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90681 ]
00:19:21.280 [2024-11-17 13:29:10.294205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:21.280 [2024-11-17 13:29:10.429983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:19:21.280 [2024-11-17 13:29:10.430114] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:19:21.280 [2024-11-17 13:29:10.430137] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:19:21.280 [2024-11-17 13:29:10.430160] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:19:21.540
00:19:21.540 real 0m0.669s
00:19:21.540 user 0m0.401s
00:19:21.541 sys 0m0.161s
00:19:21.541 13:29:10 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:21.541 13:29:10 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:19:21.541 ************************************
00:19:21.541 END TEST bdev_json_nonarray
00:19:21.541 ************************************
00:19:21.541 13:29:10 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]]
00:19:21.541 13:29:10 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]]
00:19:21.541 13:29:10 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]]
00:19:21.541 13:29:10 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT
00:19:21.541 13:29:10 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup
00:19:21.541 13:29:10 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:19:21.541 13:29:10 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:19:21.541 13:29:10 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]]
00:19:21.541 13:29:10 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]]
00:19:21.541 13:29:10 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]]
00:19:21.541 13:29:10 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]]
00:19:21.541
00:19:21.541 real 0m48.937s
00:19:21.541 user 1m5.124s
00:19:21.541 sys 0m5.626s
00:19:21.801 13:29:10 blockdev_raid5f -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:21.801 13:29:10 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:19:21.801 ************************************
00:19:21.801 END TEST blockdev_raid5f
00:19:21.801 ************************************
00:19:21.801 13:29:10 -- spdk/autotest.sh@194 -- # uname -s
00:19:21.801 13:29:10 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]]
00:19:21.801 13:29:10 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:19:21.801 13:29:10 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:19:21.801 13:29:10 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']'
00:19:21.801 13:29:10 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']'
00:19:21.801 13:29:10 -- spdk/autotest.sh@260 -- # timing_exit lib
00:19:21.801 13:29:10 -- common/autotest_common.sh@732 -- # xtrace_disable
00:19:21.801 13:29:10 -- common/autotest_common.sh@10 -- # set +x
00:19:21.801 13:29:10 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']'
00:19:21.801 13:29:10 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']'
00:19:21.801 13:29:10 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']'
00:19:21.802 13:29:10 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:19:21.802 13:29:10 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:19:21.802 13:29:10 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:19:21.802 13:29:10 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:19:21.802 13:29:10 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:19:21.802 13:29:10 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:19:21.802 13:29:10 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:19:21.802 13:29:10 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:19:21.802 13:29:10 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:19:21.802 13:29:10 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:19:21.802 13:29:10 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:19:21.802 13:29:10 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:19:21.802 13:29:10 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:19:21.802 13:29:10 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:19:21.802 13:29:10 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:19:21.802 13:29:10 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:19:21.802 13:29:10 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:19:21.802 13:29:10 -- common/autotest_common.sh@726 -- # xtrace_disable
00:19:21.802 13:29:10 -- common/autotest_common.sh@10 -- # set +x
00:19:21.802 13:29:10 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:19:21.802 13:29:10 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:19:21.802 13:29:10 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:19:21.802 13:29:10 -- common/autotest_common.sh@10 -- # set +x
00:19:24.352 INFO: APP EXITING
00:19:24.352 INFO: killing all VMs
00:19:24.352 INFO: killing vhost app
00:19:24.352 INFO: EXIT DONE
00:19:24.612 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:19:24.612 Waiting for block devices as requested
00:19:24.872 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:19:24.872 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:19:25.812 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:19:25.812 Cleaning
00:19:25.812 Removing: /var/run/dpdk/spdk0/config
00:19:25.812 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:19:25.812 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:19:25.812 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:19:25.812 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:19:25.812 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:19:25.812 Removing: /var/run/dpdk/spdk0/hugepage_info
00:19:25.812 Removing: /dev/shm/spdk_tgt_trace.pid56917
00:19:25.812 Removing: /var/run/dpdk/spdk0
00:19:25.812 Removing: /var/run/dpdk/spdk_pid56677
00:19:25.812 Removing: /var/run/dpdk/spdk_pid56917
00:19:25.812 Removing: /var/run/dpdk/spdk_pid57146
00:19:25.812 Removing: /var/run/dpdk/spdk_pid57256
00:19:25.812 Removing: /var/run/dpdk/spdk_pid57301
00:19:25.812 Removing: /var/run/dpdk/spdk_pid57440
00:19:25.812 Removing: /var/run/dpdk/spdk_pid57458
00:19:25.812 Removing: /var/run/dpdk/spdk_pid57668
00:19:25.812 Removing: /var/run/dpdk/spdk_pid57780
00:19:25.812 Removing: /var/run/dpdk/spdk_pid57887
00:19:25.812 Removing: /var/run/dpdk/spdk_pid58009
00:19:25.812 Removing: /var/run/dpdk/spdk_pid58117
00:19:25.812 Removing: /var/run/dpdk/spdk_pid58156
00:19:25.812 Removing: /var/run/dpdk/spdk_pid58193
00:19:26.072 Removing: /var/run/dpdk/spdk_pid58269
00:19:26.072 Removing: /var/run/dpdk/spdk_pid58386
00:19:26.072 Removing: /var/run/dpdk/spdk_pid58833
00:19:26.072 Removing: /var/run/dpdk/spdk_pid58908
00:19:26.072 Removing: /var/run/dpdk/spdk_pid58982
00:19:26.072 Removing: /var/run/dpdk/spdk_pid59004
00:19:26.072 Removing: /var/run/dpdk/spdk_pid59154
00:19:26.072 Removing: /var/run/dpdk/spdk_pid59170
00:19:26.072 Removing: /var/run/dpdk/spdk_pid59321
00:19:26.072 Removing: /var/run/dpdk/spdk_pid59338
00:19:26.072 Removing: /var/run/dpdk/spdk_pid59407
00:19:26.072 Removing: /var/run/dpdk/spdk_pid59429
00:19:26.072 Removing: /var/run/dpdk/spdk_pid59494
00:19:26.072 Removing: /var/run/dpdk/spdk_pid59518
00:19:26.072 Removing: /var/run/dpdk/spdk_pid59713
00:19:26.072 Removing: /var/run/dpdk/spdk_pid59744
00:19:26.072 Removing: /var/run/dpdk/spdk_pid59833
00:19:26.072 Removing: /var/run/dpdk/spdk_pid61175
00:19:26.072 Removing: /var/run/dpdk/spdk_pid61382
00:19:26.072 Removing: /var/run/dpdk/spdk_pid61528
00:19:26.072 Removing: /var/run/dpdk/spdk_pid62166
00:19:26.072 Removing: /var/run/dpdk/spdk_pid62372
00:19:26.072 Removing: /var/run/dpdk/spdk_pid62512
00:19:26.073 Removing: /var/run/dpdk/spdk_pid63155
00:19:26.073 Removing: /var/run/dpdk/spdk_pid63480
00:19:26.073 Removing: /var/run/dpdk/spdk_pid63624
00:19:26.073 Removing: /var/run/dpdk/spdk_pid65005
00:19:26.073 Removing: /var/run/dpdk/spdk_pid65259
00:19:26.073 Removing: /var/run/dpdk/spdk_pid65399
00:19:26.073 Removing: /var/run/dpdk/spdk_pid66783
00:19:26.073 Removing: /var/run/dpdk/spdk_pid67032
00:19:26.073 Removing: /var/run/dpdk/spdk_pid67182
00:19:26.073 Removing: /var/run/dpdk/spdk_pid68568
00:19:26.073 Removing: /var/run/dpdk/spdk_pid69014
00:19:26.073 Removing: /var/run/dpdk/spdk_pid69154
00:19:26.073 Removing: /var/run/dpdk/spdk_pid70643
00:19:26.073 Removing: /var/run/dpdk/spdk_pid70907
00:19:26.073 Removing: /var/run/dpdk/spdk_pid71049
00:19:26.073 Removing: /var/run/dpdk/spdk_pid72534
00:19:26.073 Removing: /var/run/dpdk/spdk_pid72800
00:19:26.073 Removing: /var/run/dpdk/spdk_pid72945
00:19:26.073 Removing: /var/run/dpdk/spdk_pid74425
00:19:26.073 Removing: /var/run/dpdk/spdk_pid74918
00:19:26.073 Removing: /var/run/dpdk/spdk_pid75058
00:19:26.073 Removing: /var/run/dpdk/spdk_pid75207
00:19:26.073 Removing: /var/run/dpdk/spdk_pid75642
00:19:26.073 Removing: /var/run/dpdk/spdk_pid76361
00:19:26.073 Removing: /var/run/dpdk/spdk_pid76750
00:19:26.073 Removing: /var/run/dpdk/spdk_pid77439
00:19:26.073 Removing: /var/run/dpdk/spdk_pid77887
00:19:26.073 Removing: /var/run/dpdk/spdk_pid78635
00:19:26.073 Removing: /var/run/dpdk/spdk_pid79065
00:19:26.073 Removing: /var/run/dpdk/spdk_pid81012
00:19:26.073 Removing: /var/run/dpdk/spdk_pid81455
00:19:26.073 Removing: /var/run/dpdk/spdk_pid81885
00:19:26.333 Removing: /var/run/dpdk/spdk_pid83971
00:19:26.333 Removing: /var/run/dpdk/spdk_pid84454
00:19:26.333 Removing: /var/run/dpdk/spdk_pid84980
00:19:26.333 Removing: /var/run/dpdk/spdk_pid86038
00:19:26.333 Removing: /var/run/dpdk/spdk_pid86365
00:19:26.333 Removing: /var/run/dpdk/spdk_pid87298
00:19:26.333 Removing: /var/run/dpdk/spdk_pid87626
00:19:26.333 Removing: /var/run/dpdk/spdk_pid88566
00:19:26.333 Removing: /var/run/dpdk/spdk_pid88889
00:19:26.333 Removing: /var/run/dpdk/spdk_pid89571
00:19:26.333 Removing: /var/run/dpdk/spdk_pid89852
00:19:26.333 Removing: /var/run/dpdk/spdk_pid89919
00:19:26.333 Removing: /var/run/dpdk/spdk_pid89971
00:19:26.333 Removing: /var/run/dpdk/spdk_pid90216
00:19:26.333 Removing: /var/run/dpdk/spdk_pid90395
00:19:26.333 Removing: /var/run/dpdk/spdk_pid90493
00:19:26.333 Removing: /var/run/dpdk/spdk_pid90587
00:19:26.333 Removing: /var/run/dpdk/spdk_pid90654
00:19:26.333 Removing: /var/run/dpdk/spdk_pid90681
00:19:26.333 Clean
00:19:26.333 13:29:15 -- common/autotest_common.sh@1453 -- # return 0
00:19:26.333 13:29:15 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:19:26.333 13:29:15 -- common/autotest_common.sh@732 -- # xtrace_disable
00:19:26.333 13:29:15 -- common/autotest_common.sh@10 -- # set +x
00:19:26.333 13:29:15 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:19:26.333 13:29:15 -- common/autotest_common.sh@732 -- # xtrace_disable
00:19:26.333 13:29:15 -- common/autotest_common.sh@10 -- # set +x
00:19:26.593 13:29:15 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:19:26.593 13:29:15 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:19:26.593 13:29:15 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:19:26.593 13:29:15 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:19:26.593 13:29:15 -- spdk/autotest.sh@398 -- # hostname
00:19:26.593 13:29:15 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:19:26.593 geninfo: WARNING: invalid characters removed from testname!
00:19:53.227 13:29:40 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:19:54.168 13:29:43 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:19:56.077 13:29:45 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:19:57.984 13:29:47 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:20:00.527 13:29:49 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:20:02.437 13:29:51 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:20:04.347 13:29:53 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:20:04.347 13:29:53 -- spdk/autorun.sh@1 -- $ timing_finish
00:20:04.347 13:29:53 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:20:04.347 13:29:53 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:20:04.347 13:29:53 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:20:04.347 13:29:53 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:20:04.347 + [[ -n 5428 ]]
00:20:04.347 + sudo kill 5428
00:20:04.357 [Pipeline] }
00:20:04.372 [Pipeline] // timeout
00:20:04.377 [Pipeline] }
00:20:04.394 [Pipeline] // stage
00:20:04.401 [Pipeline] }
00:20:04.417 [Pipeline] // catchError
00:20:04.428 [Pipeline] stage
00:20:04.431 [Pipeline] { (Stop VM)
00:20:04.444 [Pipeline] sh
00:20:04.727 + vagrant halt
00:20:07.266 ==> default: Halting domain...
00:20:15.421 [Pipeline] sh
00:20:15.704 + vagrant destroy -f
00:20:18.274 ==> default: Removing domain...
00:20:18.294 [Pipeline] sh
00:20:18.571 + mv output /var/jenkins/workspace/raid-vg-autotest_2/output
00:20:18.581 [Pipeline] }
00:20:18.595 [Pipeline] // stage
00:20:18.600 [Pipeline] }
00:20:18.614 [Pipeline] // dir
00:20:18.619 [Pipeline] }
00:20:18.632 [Pipeline] // wrap
00:20:18.638 [Pipeline] }
00:20:18.647 [Pipeline] // catchError
00:20:18.656 [Pipeline] stage
00:20:18.659 [Pipeline] { (Epilogue)
00:20:18.672 [Pipeline] sh
00:20:18.957 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:20:23.171 [Pipeline] catchError
00:20:23.173 [Pipeline] {
00:20:23.186 [Pipeline] sh
00:20:23.472 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:20:23.472 Artifacts sizes are good
00:20:23.482 [Pipeline] }
00:20:23.496 [Pipeline] // catchError
00:20:23.508 [Pipeline] archiveArtifacts
00:20:23.516 Archiving artifacts
00:20:23.619 [Pipeline] cleanWs
00:20:23.631 [WS-CLEANUP] Deleting project workspace...
00:20:23.631 [WS-CLEANUP] Deferred wipeout is used...
00:20:23.638 [WS-CLEANUP] done
00:20:23.640 [Pipeline] }
00:20:23.655 [Pipeline] // stage
00:20:23.660 [Pipeline] }
00:20:23.674 [Pipeline] // node
00:20:23.679 [Pipeline] End of Pipeline
00:20:23.718 Finished: SUCCESS